=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-208098 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-208098 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m10.161800934s)
-- stdout --
* [old-k8s-version-208098] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20384
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20384-872300/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-872300/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-208098" primary control-plane node in "old-k8s-version-208098" cluster
* Pulling base image v0.0.46-1744107393-20604 ...
* Restarting existing docker container for "old-k8s-version-208098" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-208098 addons enable metrics-server
* Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
-- /stdout --
** stderr **
I0414 13:22:49.800606 1087820 out.go:345] Setting OutFile to fd 1 ...
I0414 13:22:49.800739 1087820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:22:49.800750 1087820 out.go:358] Setting ErrFile to fd 2...
I0414 13:22:49.800756 1087820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:22:49.801011 1087820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-872300/.minikube/bin
I0414 13:22:49.801412 1087820 out.go:352] Setting JSON to false
I0414 13:22:49.802460 1087820 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18314,"bootTime":1744618656,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I0414 13:22:49.802533 1087820 start.go:139] virtualization:
I0414 13:22:49.805779 1087820 out.go:177] * [old-k8s-version-208098] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0414 13:22:49.809583 1087820 out.go:177] - MINIKUBE_LOCATION=20384
I0414 13:22:49.809756 1087820 notify.go:220] Checking for updates...
I0414 13:22:49.815588 1087820 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0414 13:22:49.818495 1087820 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20384-872300/kubeconfig
I0414 13:22:49.821329 1087820 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-872300/.minikube
I0414 13:22:49.824779 1087820 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0414 13:22:49.827683 1087820 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0414 13:22:49.831572 1087820 config.go:182] Loaded profile config "old-k8s-version-208098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0414 13:22:49.834378 1087820 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
I0414 13:22:49.838045 1087820 driver.go:394] Setting default libvirt URI to qemu:///system
I0414 13:22:49.869026 1087820 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0414 13:22:49.869157 1087820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0414 13:22:49.935761 1087820 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-14 13:22:49.926442263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0414 13:22:49.935868 1087820 docker.go:318] overlay module found
I0414 13:22:49.938991 1087820 out.go:177] * Using the docker driver based on existing profile
I0414 13:22:49.941913 1087820 start.go:297] selected driver: docker
I0414 13:22:49.941935 1087820 start.go:901] validating driver "docker" against &{Name:old-k8s-version-208098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-208098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 13:22:49.942063 1087820 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0414 13:22:49.942775 1087820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0414 13:22:50.030115 1087820 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:45 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-14 13:22:50.005242575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0414 13:22:50.030486 1087820 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0414 13:22:50.030532 1087820 cni.go:84] Creating CNI manager for ""
I0414 13:22:50.030599 1087820 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0414 13:22:50.030665 1087820 start.go:340] cluster config:
{Name:old-k8s-version-208098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-208098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 13:22:50.035610 1087820 out.go:177] * Starting "old-k8s-version-208098" primary control-plane node in "old-k8s-version-208098" cluster
I0414 13:22:50.038920 1087820 cache.go:121] Beginning downloading kic base image for docker with containerd
I0414 13:22:50.042057 1087820 out.go:177] * Pulling base image v0.0.46-1744107393-20604 ...
I0414 13:22:50.045018 1087820 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0414 13:22:50.045089 1087820 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20384-872300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0414 13:22:50.045125 1087820 cache.go:56] Caching tarball of preloaded images
I0414 13:22:50.045137 1087820 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon
I0414 13:22:50.045264 1087820 preload.go:172] Found /home/jenkins/minikube-integration/20384-872300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0414 13:22:50.045279 1087820 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0414 13:22:50.045402 1087820 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/old-k8s-version-208098/config.json ...
I0414 13:22:50.073904 1087820 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon, skipping pull
I0414 13:22:50.073930 1087820 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a exists in daemon, skipping load
I0414 13:22:50.073951 1087820 cache.go:230] Successfully downloaded all kic artifacts
I0414 13:22:50.073980 1087820 start.go:360] acquireMachinesLock for old-k8s-version-208098: {Name:mk563e42a42ff66b913b302f5b69f9d4816611ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 13:22:50.074048 1087820 start.go:364] duration metric: took 41.781µs to acquireMachinesLock for "old-k8s-version-208098"
I0414 13:22:50.074073 1087820 start.go:96] Skipping create...Using existing machine configuration
I0414 13:22:50.074078 1087820 fix.go:54] fixHost starting:
I0414 13:22:50.074637 1087820 cli_runner.go:164] Run: docker container inspect old-k8s-version-208098 --format={{.State.Status}}
I0414 13:22:50.094294 1087820 fix.go:112] recreateIfNeeded on old-k8s-version-208098: state=Stopped err=<nil>
W0414 13:22:50.094331 1087820 fix.go:138] unexpected machine state, will restart: <nil>
I0414 13:22:50.097682 1087820 out.go:177] * Restarting existing docker container for "old-k8s-version-208098" ...
I0414 13:22:50.100688 1087820 cli_runner.go:164] Run: docker start old-k8s-version-208098
I0414 13:22:50.392637 1087820 cli_runner.go:164] Run: docker container inspect old-k8s-version-208098 --format={{.State.Status}}
I0414 13:22:50.421028 1087820 kic.go:430] container "old-k8s-version-208098" state is running.
I0414 13:22:50.421424 1087820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-208098
I0414 13:22:50.443985 1087820 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/old-k8s-version-208098/config.json ...
I0414 13:22:50.444218 1087820 machine.go:93] provisionDockerMachine start ...
I0414 13:22:50.444295 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:50.472298 1087820 main.go:141] libmachine: Using SSH client type: native
I0414 13:22:50.472618 1087820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34168 <nil> <nil>}
I0414 13:22:50.472628 1087820 main.go:141] libmachine: About to run SSH command:
hostname
I0414 13:22:50.473395 1087820 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41726->127.0.0.1:34168: read: connection reset by peer
I0414 13:22:53.597028 1087820 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-208098
I0414 13:22:53.597058 1087820 ubuntu.go:169] provisioning hostname "old-k8s-version-208098"
I0414 13:22:53.597121 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:53.615520 1087820 main.go:141] libmachine: Using SSH client type: native
I0414 13:22:53.615830 1087820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34168 <nil> <nil>}
I0414 13:22:53.615846 1087820 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-208098 && echo "old-k8s-version-208098" | sudo tee /etc/hostname
I0414 13:22:53.760043 1087820 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-208098
I0414 13:22:53.760156 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:53.779173 1087820 main.go:141] libmachine: Using SSH client type: native
I0414 13:22:53.779494 1087820 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34168 <nil> <nil>}
I0414 13:22:53.779516 1087820 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-208098' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-208098/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-208098' | sudo tee -a /etc/hosts;
fi
fi
I0414 13:22:53.909871 1087820 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 13:22:53.909900 1087820 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20384-872300/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-872300/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-872300/.minikube}
I0414 13:22:53.909929 1087820 ubuntu.go:177] setting up certificates
I0414 13:22:53.909938 1087820 provision.go:84] configureAuth start
I0414 13:22:53.910004 1087820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-208098
I0414 13:22:53.928233 1087820 provision.go:143] copyHostCerts
I0414 13:22:53.928304 1087820 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-872300/.minikube/cert.pem, removing ...
I0414 13:22:53.928327 1087820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-872300/.minikube/cert.pem
I0414 13:22:53.928449 1087820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-872300/.minikube/cert.pem (1123 bytes)
I0414 13:22:53.928560 1087820 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-872300/.minikube/key.pem, removing ...
I0414 13:22:53.928575 1087820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-872300/.minikube/key.pem
I0414 13:22:53.928607 1087820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-872300/.minikube/key.pem (1675 bytes)
I0414 13:22:53.928669 1087820 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-872300/.minikube/ca.pem, removing ...
I0414 13:22:53.928677 1087820 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-872300/.minikube/ca.pem
I0414 13:22:53.928703 1087820 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-872300/.minikube/ca.pem (1082 bytes)
I0414 13:22:53.928762 1087820 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-872300/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-208098 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-208098]
I0414 13:22:54.518681 1087820 provision.go:177] copyRemoteCerts
I0414 13:22:54.518762 1087820 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0414 13:22:54.518803 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:54.535799 1087820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/old-k8s-version-208098/id_rsa Username:docker}
I0414 13:22:54.627069 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0414 13:22:54.653211 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0414 13:22:54.678546 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0414 13:22:54.705014 1087820 provision.go:87] duration metric: took 795.057286ms to configureAuth
I0414 13:22:54.705041 1087820 ubuntu.go:193] setting minikube options for container-runtime
I0414 13:22:54.705250 1087820 config.go:182] Loaded profile config "old-k8s-version-208098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0414 13:22:54.705262 1087820 machine.go:96] duration metric: took 4.261029031s to provisionDockerMachine
I0414 13:22:54.705271 1087820 start.go:293] postStartSetup for "old-k8s-version-208098" (driver="docker")
I0414 13:22:54.705282 1087820 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0414 13:22:54.705347 1087820 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0414 13:22:54.705404 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:54.726187 1087820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/old-k8s-version-208098/id_rsa Username:docker}
I0414 13:22:54.823212 1087820 ssh_runner.go:195] Run: cat /etc/os-release
I0414 13:22:54.826607 1087820 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0414 13:22:54.826649 1087820 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0414 13:22:54.826660 1087820 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0414 13:22:54.826668 1087820 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0414 13:22:54.826681 1087820 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-872300/.minikube/addons for local assets ...
I0414 13:22:54.826743 1087820 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-872300/.minikube/files for local assets ...
I0414 13:22:54.826836 1087820 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-872300/.minikube/files/etc/ssl/certs/8777952.pem -> 8777952.pem in /etc/ssl/certs
I0414 13:22:54.826951 1087820 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0414 13:22:54.836058 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/files/etc/ssl/certs/8777952.pem --> /etc/ssl/certs/8777952.pem (1708 bytes)
I0414 13:22:54.861438 1087820 start.go:296] duration metric: took 156.1508ms for postStartSetup
I0414 13:22:54.861534 1087820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0414 13:22:54.861587 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:54.879624 1087820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/old-k8s-version-208098/id_rsa Username:docker}
I0414 13:22:54.968167 1087820 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0414 13:22:54.973335 1087820 fix.go:56] duration metric: took 4.899248319s for fixHost
I0414 13:22:54.973417 1087820 start.go:83] releasing machines lock for "old-k8s-version-208098", held for 4.899354413s
I0414 13:22:54.973518 1087820 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-208098
I0414 13:22:54.991467 1087820 ssh_runner.go:195] Run: cat /version.json
I0414 13:22:54.991527 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:54.991535 1087820 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0414 13:22:54.991593 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:55.016947 1087820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/old-k8s-version-208098/id_rsa Username:docker}
I0414 13:22:55.032962 1087820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/old-k8s-version-208098/id_rsa Username:docker}
I0414 13:22:55.113352 1087820 ssh_runner.go:195] Run: systemctl --version
I0414 13:22:55.294464 1087820 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0414 13:22:55.299050 1087820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0414 13:22:55.318167 1087820 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0414 13:22:55.318294 1087820 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0414 13:22:55.328079 1087820 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0414 13:22:55.328106 1087820 start.go:495] detecting cgroup driver to use...
I0414 13:22:55.328166 1087820 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0414 13:22:55.328241 1087820 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0414 13:22:55.342693 1087820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0414 13:22:55.355373 1087820 docker.go:217] disabling cri-docker service (if available) ...
I0414 13:22:55.355445 1087820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0414 13:22:55.369218 1087820 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0414 13:22:55.381337 1087820 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0414 13:22:55.484466 1087820 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0414 13:22:55.569696 1087820 docker.go:233] disabling docker service ...
I0414 13:22:55.569825 1087820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0414 13:22:55.582531 1087820 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0414 13:22:55.593884 1087820 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0414 13:22:55.678208 1087820 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0414 13:22:55.766664 1087820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0414 13:22:55.778518 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0414 13:22:55.795616 1087820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0414 13:22:55.806757 1087820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0414 13:22:55.817688 1087820 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0414 13:22:55.817813 1087820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0414 13:22:55.828076 1087820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 13:22:55.839059 1087820 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0414 13:22:55.848873 1087820 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 13:22:55.859154 1087820 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0414 13:22:55.870222 1087820 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
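The run of `sed` commands above rewrites containerd's config in place: pinning the sandbox image to `pause:3.2` (to match Kubernetes v1.20.0), forcing `restrict_oom_score_adj = false`, and setting `SystemdCgroup = false` to match the detected "cgroupfs" host driver. The same substitutions can be exercised on a throwaway file; the TOML fragment below is a minimal assumed stand-in, not minikube's full default `config.toml`:

```shell
# Minimal assumed fragment of /etc/containerd/config.toml (not the full default).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  restrict_oom_score_adj = true
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# Same indentation-preserving substitutions the log runs against the real file:
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' "$cfg"
sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep -E 'pause:3\.2|SystemdCgroup = false|restrict_oom_score_adj = false' "$cfg"
rm -f "$cfg"
```

The `^( *)` capture keeps the original leading indentation, which is why the edits survive differently formatted config files (GNU sed assumed, as in the log's Linux guest).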
I0414 13:22:55.880786 1087820 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0414 13:22:55.890697 1087820 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0414 13:22:55.902641 1087820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 13:22:55.987371 1087820 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0414 13:22:56.191903 1087820 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0414 13:22:56.191984 1087820 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 13:22:56.196363 1087820 start.go:563] Will wait 60s for crictl version
I0414 13:22:56.196434 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:22:56.200658 1087820 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0414 13:22:56.245107 1087820 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0414 13:22:56.245190 1087820 ssh_runner.go:195] Run: containerd --version
I0414 13:22:56.273068 1087820 ssh_runner.go:195] Run: containerd --version
I0414 13:22:56.304816 1087820 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
I0414 13:22:56.307823 1087820 cli_runner.go:164] Run: docker network inspect old-k8s-version-208098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0414 13:22:56.329120 1087820 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0414 13:22:56.332868 1087820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
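The `/etc/hosts` update above (repeated later for `control-plane.minikube.internal`) is an idempotent rewrite: strip any existing tab-separated entry for the name, append a fresh one, then copy the result back. A sketch on a scratch file, with the path and addresses as stand-ins for the real `/etc/hosts`:

```shell
# Demonstrate the idempotent hosts-file update from the log on a scratch copy.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n' > "$hosts"
# Drop any existing entry for the name (tab + name at end of line), append a fresh one:
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '192.168.85.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # exactly one entry survives
rm -f "$hosts"
```

In the log the final `cp` runs under `sudo` because only the staging file in `/tmp` is writable by the ssh user; the bash `$'\t'` quoting is what makes the tab in the `grep -v` pattern literal.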
I0414 13:22:56.344160 1087820 kubeadm.go:883] updating cluster {Name:old-k8s-version-208098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-208098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0414 13:22:56.344279 1087820 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0414 13:22:56.344343 1087820 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 13:22:56.390030 1087820 containerd.go:627] all images are preloaded for containerd runtime.
I0414 13:22:56.390055 1087820 containerd.go:534] Images already preloaded, skipping extraction
I0414 13:22:56.390123 1087820 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 13:22:56.429674 1087820 containerd.go:627] all images are preloaded for containerd runtime.
I0414 13:22:56.429701 1087820 cache_images.go:84] Images are preloaded, skipping loading
I0414 13:22:56.429711 1087820 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
I0414 13:22:56.429869 1087820 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-208098 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-208098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0414 13:22:56.429955 1087820 ssh_runner.go:195] Run: sudo crictl info
I0414 13:22:56.481695 1087820 cni.go:84] Creating CNI manager for ""
I0414 13:22:56.481725 1087820 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0414 13:22:56.481735 1087820 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0414 13:22:56.481755 1087820 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-208098 NodeName:old-k8s-version-208098 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0414 13:22:56.481891 1087820 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-208098"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0414 13:22:56.481970 1087820 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0414 13:22:56.494210 1087820 binaries.go:44] Found k8s binaries, skipping transfer
I0414 13:22:56.494283 1087820 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0414 13:22:56.503597 1087820 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0414 13:22:56.522978 1087820 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0414 13:22:56.541577 1087820 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0414 13:22:56.561989 1087820 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0414 13:22:56.565589 1087820 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 13:22:56.577502 1087820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 13:22:56.671659 1087820 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 13:22:56.686671 1087820 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/old-k8s-version-208098 for IP: 192.168.85.2
I0414 13:22:56.686695 1087820 certs.go:194] generating shared ca certs ...
I0414 13:22:56.686714 1087820 certs.go:226] acquiring lock for ca certs: {Name:mk6c53e70c2e2090a74ed171d7f164ad48f748f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:22:56.686943 1087820 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-872300/.minikube/ca.key
I0414 13:22:56.687011 1087820 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-872300/.minikube/proxy-client-ca.key
I0414 13:22:56.687024 1087820 certs.go:256] generating profile certs ...
I0414 13:22:56.687149 1087820 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/old-k8s-version-208098/client.key
I0414 13:22:56.687273 1087820 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/old-k8s-version-208098/apiserver.key.24830a57
I0414 13:22:56.687360 1087820 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/old-k8s-version-208098/proxy-client.key
I0414 13:22:56.687537 1087820 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/877795.pem (1338 bytes)
W0414 13:22:56.687601 1087820 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-872300/.minikube/certs/877795_empty.pem, impossibly tiny 0 bytes
I0414 13:22:56.687619 1087820 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca-key.pem (1679 bytes)
I0414 13:22:56.687661 1087820 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca.pem (1082 bytes)
I0414 13:22:56.687714 1087820 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/cert.pem (1123 bytes)
I0414 13:22:56.687767 1087820 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/key.pem (1675 bytes)
I0414 13:22:56.687850 1087820 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/files/etc/ssl/certs/8777952.pem (1708 bytes)
I0414 13:22:56.688551 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0414 13:22:56.728275 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0414 13:22:56.757799 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0414 13:22:56.790586 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0414 13:22:56.823970 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/old-k8s-version-208098/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0414 13:22:56.849726 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/old-k8s-version-208098/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0414 13:22:56.875519 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/old-k8s-version-208098/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0414 13:22:56.902467 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/old-k8s-version-208098/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0414 13:22:56.930938 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0414 13:22:56.957072 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/certs/877795.pem --> /usr/share/ca-certificates/877795.pem (1338 bytes)
I0414 13:22:56.986980 1087820 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/files/etc/ssl/certs/8777952.pem --> /usr/share/ca-certificates/8777952.pem (1708 bytes)
I0414 13:22:57.015739 1087820 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0414 13:22:57.036606 1087820 ssh_runner.go:195] Run: openssl version
I0414 13:22:57.047232 1087820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0414 13:22:57.058249 1087820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0414 13:22:57.062235 1087820 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:36 /usr/share/ca-certificates/minikubeCA.pem
I0414 13:22:57.062356 1087820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0414 13:22:57.070425 1087820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0414 13:22:57.079929 1087820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/877795.pem && ln -fs /usr/share/ca-certificates/877795.pem /etc/ssl/certs/877795.pem"
I0414 13:22:57.090102 1087820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/877795.pem
I0414 13:22:57.093766 1087820 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:44 /usr/share/ca-certificates/877795.pem
I0414 13:22:57.093841 1087820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/877795.pem
I0414 13:22:57.101054 1087820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/877795.pem /etc/ssl/certs/51391683.0"
I0414 13:22:57.110043 1087820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8777952.pem && ln -fs /usr/share/ca-certificates/8777952.pem /etc/ssl/certs/8777952.pem"
I0414 13:22:57.119812 1087820 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8777952.pem
I0414 13:22:57.123504 1087820 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:44 /usr/share/ca-certificates/8777952.pem
I0414 13:22:57.123596 1087820 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8777952.pem
I0414 13:22:57.130938 1087820 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8777952.pem /etc/ssl/certs/3ec20f2e.0"
I0414 13:22:57.140066 1087820 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0414 13:22:57.143902 1087820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0414 13:22:57.150917 1087820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0414 13:22:57.157928 1087820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0414 13:22:57.164937 1087820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0414 13:22:57.172553 1087820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0414 13:22:57.179551 1087820 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
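The `openssl x509` calls above cover two separate checks: `-hash` derives the 8-hex-digit subject hash used to name the `/etc/ssl/certs` symlinks (e.g. `b5213941.0`, `51391683.0` earlier in the log), and `-checkend 86400` exits non-zero if a cert would expire within 24 hours, which is what lets the restart skip regeneration. Both can be tried against a throwaway self-signed cert; the subject and validity below are placeholders, not minikube's real certs:

```shell
# Generate a throwaway self-signed cert (2-day validity) to exercise the same checks.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" -days 2 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" 2>/dev/null
# Expiry check from the log: exit 0 only if still valid 86400s (24h) from now.
openssl x509 -noout -in "$tmp/cert.pem" -checkend 86400 && echo "valid for >=24h"
# Subject hash that names the /etc/ssl/certs/<hash>.0 symlinks:
openssl x509 -hash -noout -in "$tmp/cert.pem"
rm -rf "$tmp"
```

A cert failing `-checkend` (or a missing file, like a first-time `apiserver-kubelet-client.crt`) is what would push minikube into regenerating certs instead of reusing them.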
I0414 13:22:57.186630 1087820 kubeadm.go:392] StartCluster: {Name:old-k8s-version-208098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-208098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 13:22:57.186722 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0414 13:22:57.186780 1087820 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0414 13:22:57.233672 1087820 cri.go:89] found id: "69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652"
I0414 13:22:57.233700 1087820 cri.go:89] found id: "2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832"
I0414 13:22:57.233706 1087820 cri.go:89] found id: "f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6"
I0414 13:22:57.233710 1087820 cri.go:89] found id: "8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a"
I0414 13:22:57.233714 1087820 cri.go:89] found id: "7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717"
I0414 13:22:57.233718 1087820 cri.go:89] found id: "ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa"
I0414 13:22:57.233721 1087820 cri.go:89] found id: "5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87"
I0414 13:22:57.233725 1087820 cri.go:89] found id: "102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588"
I0414 13:22:57.233728 1087820 cri.go:89] found id: ""
I0414 13:22:57.233780 1087820 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0414 13:22:57.250793 1087820 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-04-14T13:22:57Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0414 13:22:57.250879 1087820 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0414 13:22:57.260126 1087820 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0414 13:22:57.260148 1087820 kubeadm.go:593] restartPrimaryControlPlane start ...
I0414 13:22:57.260224 1087820 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0414 13:22:57.269416 1087820 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0414 13:22:57.270074 1087820 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-208098" does not appear in /home/jenkins/minikube-integration/20384-872300/kubeconfig
I0414 13:22:57.270322 1087820 kubeconfig.go:62] /home/jenkins/minikube-integration/20384-872300/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-208098" cluster setting kubeconfig missing "old-k8s-version-208098" context setting]
I0414 13:22:57.270821 1087820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-872300/kubeconfig: {Name:mk13ae85a2a9898fe90d877c6e0555eb91505972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:22:57.272154 1087820 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0414 13:22:57.282912 1087820 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I0414 13:22:57.282944 1087820 kubeadm.go:597] duration metric: took 22.789676ms to restartPrimaryControlPlane
I0414 13:22:57.282985 1087820 kubeadm.go:394] duration metric: took 96.367355ms to StartCluster
I0414 13:22:57.283007 1087820 settings.go:142] acquiring lock: {Name:mk4342d1d30492caa49997f191de6b9a857bfe8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:22:57.283090 1087820 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20384-872300/kubeconfig
I0414 13:22:57.283979 1087820 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-872300/kubeconfig: {Name:mk13ae85a2a9898fe90d877c6e0555eb91505972 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:22:57.284231 1087820 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0414 13:22:57.284499 1087820 config.go:182] Loaded profile config "old-k8s-version-208098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0414 13:22:57.284550 1087820 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0414 13:22:57.284623 1087820 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-208098"
I0414 13:22:57.284654 1087820 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-208098"
W0414 13:22:57.284664 1087820 addons.go:247] addon storage-provisioner should already be in state true
I0414 13:22:57.284686 1087820 host.go:66] Checking if "old-k8s-version-208098" exists ...
I0414 13:22:57.285335 1087820 cli_runner.go:164] Run: docker container inspect old-k8s-version-208098 --format={{.State.Status}}
I0414 13:22:57.285688 1087820 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-208098"
I0414 13:22:57.285732 1087820 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-208098"
I0414 13:22:57.286283 1087820 cli_runner.go:164] Run: docker container inspect old-k8s-version-208098 --format={{.State.Status}}
I0414 13:22:57.287080 1087820 addons.go:69] Setting dashboard=true in profile "old-k8s-version-208098"
I0414 13:22:57.287102 1087820 addons.go:238] Setting addon dashboard=true in "old-k8s-version-208098"
W0414 13:22:57.287109 1087820 addons.go:247] addon dashboard should already be in state true
I0414 13:22:57.287146 1087820 host.go:66] Checking if "old-k8s-version-208098" exists ...
I0414 13:22:57.287654 1087820 cli_runner.go:164] Run: docker container inspect old-k8s-version-208098 --format={{.State.Status}}
I0414 13:22:57.288124 1087820 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-208098"
I0414 13:22:57.288173 1087820 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-208098"
W0414 13:22:57.288215 1087820 addons.go:247] addon metrics-server should already be in state true
I0414 13:22:57.288302 1087820 host.go:66] Checking if "old-k8s-version-208098" exists ...
I0414 13:22:57.289263 1087820 out.go:177] * Verifying Kubernetes components...
I0414 13:22:57.290105 1087820 cli_runner.go:164] Run: docker container inspect old-k8s-version-208098 --format={{.State.Status}}
I0414 13:22:57.291791 1087820 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 13:22:57.321742 1087820 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0414 13:22:57.327592 1087820 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0414 13:22:57.327626 1087820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0414 13:22:57.327691 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:57.346127 1087820 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0414 13:22:57.347870 1087820 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0414 13:22:57.347894 1087820 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0414 13:22:57.347965 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:57.359188 1087820 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0414 13:22:57.364484 1087820 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0414 13:22:57.365973 1087820 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0414 13:22:57.365995 1087820 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0414 13:22:57.366072 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:57.378043 1087820 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-208098"
W0414 13:22:57.378073 1087820 addons.go:247] addon default-storageclass should already be in state true
I0414 13:22:57.378098 1087820 host.go:66] Checking if "old-k8s-version-208098" exists ...
I0414 13:22:57.378517 1087820 cli_runner.go:164] Run: docker container inspect old-k8s-version-208098 --format={{.State.Status}}
I0414 13:22:57.405728 1087820 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0414 13:22:57.405753 1087820 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0414 13:22:57.405819 1087820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-208098
I0414 13:22:57.409290 1087820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/old-k8s-version-208098/id_rsa Username:docker}
I0414 13:22:57.428932 1087820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/old-k8s-version-208098/id_rsa Username:docker}
I0414 13:22:57.437912 1087820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/old-k8s-version-208098/id_rsa Username:docker}
I0414 13:22:57.460014 1087820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34168 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/old-k8s-version-208098/id_rsa Username:docker}
I0414 13:22:57.473554 1087820 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 13:22:57.506158 1087820 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-208098" to be "Ready" ...
I0414 13:22:57.552305 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0414 13:22:57.612469 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0414 13:22:57.620052 1087820 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0414 13:22:57.620118 1087820 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0414 13:22:57.648892 1087820 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0414 13:22:57.648961 1087820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0414 13:22:57.679035 1087820 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0414 13:22:57.679102 1087820 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
W0414 13:22:57.694789 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:57.694934 1087820 retry.go:31] will retry after 181.799749ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:57.697595 1087820 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0414 13:22:57.697676 1087820 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0414 13:22:57.736290 1087820 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0414 13:22:57.736317 1087820 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0414 13:22:57.742261 1087820 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0414 13:22:57.742289 1087820 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
W0414 13:22:57.773391 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:57.773422 1087820 retry.go:31] will retry after 205.298492ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:57.775564 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0414 13:22:57.778883 1087820 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0414 13:22:57.778910 1087820 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0414 13:22:57.799669 1087820 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0414 13:22:57.799695 1087820 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0414 13:22:57.819931 1087820 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0414 13:22:57.819954 1087820 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0414 13:22:57.840890 1087820 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0414 13:22:57.840913 1087820 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0414 13:22:57.862495 1087820 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0414 13:22:57.862518 1087820 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0414 13:22:57.877692 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0414 13:22:57.882616 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:57.882648 1087820 retry.go:31] will retry after 325.24782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:57.884330 1087820 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0414 13:22:57.884353 1087820 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0414 13:22:57.910898 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0414 13:22:57.979201 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0414 13:22:58.015457 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.015491 1087820 retry.go:31] will retry after 296.346612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 13:22:58.015568 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.015599 1087820 retry.go:31] will retry after 264.764325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 13:22:58.084668 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.084699 1087820 retry.go:31] will retry after 332.769222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.209015 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0414 13:22:58.280589 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 13:22:58.290865 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.290896 1087820 retry.go:31] will retry after 202.99284ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.312060 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0414 13:22:58.379941 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.379972 1087820 retry.go:31] will retry after 485.075398ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 13:22:58.411567 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.411657 1087820 retry.go:31] will retry after 476.184647ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.417759 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0414 13:22:58.494074 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0414 13:22:58.504534 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.504628 1087820 retry.go:31] will retry after 731.739446ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 13:22:58.584293 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.584326 1087820 retry.go:31] will retry after 309.347239ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:58.865780 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0414 13:22:58.888226 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0414 13:22:58.894516 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0414 13:22:59.008261 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:59.008296 1087820 retry.go:31] will retry after 421.193362ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 13:22:59.026330 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:59.026424 1087820 retry.go:31] will retry after 953.132341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 13:22:59.040534 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:59.040570 1087820 retry.go:31] will retry after 613.456963ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:59.236972 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0414 13:22:59.315947 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:59.315995 1087820 retry.go:31] will retry after 1.232261625s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:59.430234 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0414 13:22:59.506942 1087820 node_ready.go:53] error getting node "old-k8s-version-208098": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-208098": dial tcp 192.168.85.2:8443: connect: connection refused
W0414 13:22:59.511191 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:59.511225 1087820 retry.go:31] will retry after 859.144541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:59.654503 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0414 13:22:59.738543 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:59.738627 1087820 retry.go:31] will retry after 1.005081484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:22:59.979995 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0414 13:23:00.154238 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:00.154371 1087820 retry.go:31] will retry after 870.47545ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:00.370641 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 13:23:00.463310 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:00.463366 1087820 retry.go:31] will retry after 663.638029ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:00.549002 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0414 13:23:00.631225 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:00.631260 1087820 retry.go:31] will retry after 1.051842181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:00.744539 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0414 13:23:00.819395 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:00.819434 1087820 retry.go:31] will retry after 1.685428891s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:01.025472 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0414 13:23:01.105392 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:01.105819 1087820 retry.go:31] will retry after 1.495458845s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:01.127565 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 13:23:01.208223 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:01.208259 1087820 retry.go:31] will retry after 1.365529781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:01.683391 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0414 13:23:01.793167 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:01.793203 1087820 retry.go:31] will retry after 1.489478163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:02.006942 1087820 node_ready.go:53] error getting node "old-k8s-version-208098": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-208098": dial tcp 192.168.85.2:8443: connect: connection refused
I0414 13:23:02.505764 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0414 13:23:02.574608 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 13:23:02.592781 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:02.592816 1087820 retry.go:31] will retry after 3.958121138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:02.602125 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0414 13:23:02.682838 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:02.682881 1087820 retry.go:31] will retry after 1.46020826s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 13:23:02.701494 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:02.701529 1087820 retry.go:31] will retry after 3.336332648s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:03.283727 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0414 13:23:03.371935 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:03.371972 1087820 retry.go:31] will retry after 1.711082396s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:04.007612 1087820 node_ready.go:53] error getting node "old-k8s-version-208098": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-208098": dial tcp 192.168.85.2:8443: connect: connection refused
I0414 13:23:04.143968 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 13:23:04.225239 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:04.225273 1087820 retry.go:31] will retry after 4.692659853s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:05.083692 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0414 13:23:05.233069 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:05.233098 1087820 retry.go:31] will retry after 3.445504513s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:06.038686 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0414 13:23:06.220820 1087820 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:06.220854 1087820 retry.go:31] will retry after 2.823824921s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 13:23:06.551838 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0414 13:23:08.679332 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0414 13:23:08.918909 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0414 13:23:09.045480 1087820 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0414 13:23:14.157412 1087820 node_ready.go:49] node "old-k8s-version-208098" has status "Ready":"True"
I0414 13:23:14.157440 1087820 node_ready.go:38] duration metric: took 16.651196973s for node "old-k8s-version-208098" to be "Ready" ...
I0414 13:23:14.157451 1087820 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0414 13:23:14.542188 1087820 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-5r22q" in "kube-system" namespace to be "Ready" ...
I0414 13:23:14.574866 1087820 pod_ready.go:93] pod "coredns-74ff55c5b-5r22q" in "kube-system" namespace has status "Ready":"True"
I0414 13:23:14.574892 1087820 pod_ready.go:82] duration metric: took 32.667156ms for pod "coredns-74ff55c5b-5r22q" in "kube-system" namespace to be "Ready" ...
I0414 13:23:14.574903 1087820 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-208098" in "kube-system" namespace to be "Ready" ...
I0414 13:23:14.696609 1087820 pod_ready.go:93] pod "etcd-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"True"
I0414 13:23:14.696636 1087820 pod_ready.go:82] duration metric: took 121.726198ms for pod "etcd-old-k8s-version-208098" in "kube-system" namespace to be "Ready" ...
I0414 13:23:14.696650 1087820 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-208098" in "kube-system" namespace to be "Ready" ...
I0414 13:23:16.689396 1087820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.137506993s)
I0414 13:23:16.689451 1087820 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-208098"
I0414 13:23:16.689481 1087820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.010125328s)
I0414 13:23:16.745772 1087820 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:17.308483 1087820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.38952125s)
I0414 13:23:17.308706 1087820 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.263199693s)
I0414 13:23:17.312072 1087820 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-208098 addons enable metrics-server
I0414 13:23:17.315126 1087820 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
I0414 13:23:17.318245 1087820 addons.go:514] duration metric: took 20.033689132s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner dashboard]
I0414 13:23:19.206508 1087820 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:21.701944 1087820 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:23.702329 1087820 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:25.702469 1087820 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:26.702042 1087820 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"True"
I0414 13:23:26.702070 1087820 pod_ready.go:82] duration metric: took 12.005410987s for pod "kube-apiserver-old-k8s-version-208098" in "kube-system" namespace to be "Ready" ...
I0414 13:23:26.702084 1087820 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace to be "Ready" ...
I0414 13:23:28.708695 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:31.208575 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:33.238642 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:35.718174 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:38.210122 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:40.214907 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:42.708489 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:44.711186 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:46.758391 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:49.208214 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:51.209261 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:53.212741 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:55.709829 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:23:58.208077 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:00.267541 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:02.707849 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:05.207886 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:07.707571 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:10.208767 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:12.709179 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:15.208271 1087820 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:16.707795 1087820 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"True"
I0414 13:24:16.707820 1087820 pod_ready.go:82] duration metric: took 50.005728378s for pod "kube-controller-manager-old-k8s-version-208098" in "kube-system" namespace to be "Ready" ...
I0414 13:24:16.707852 1087820 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-25hcq" in "kube-system" namespace to be "Ready" ...
I0414 13:24:16.712028 1087820 pod_ready.go:93] pod "kube-proxy-25hcq" in "kube-system" namespace has status "Ready":"True"
I0414 13:24:16.712056 1087820 pod_ready.go:82] duration metric: took 4.180766ms for pod "kube-proxy-25hcq" in "kube-system" namespace to be "Ready" ...
I0414 13:24:16.712067 1087820 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-208098" in "kube-system" namespace to be "Ready" ...
I0414 13:24:18.718410 1087820 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:21.217683 1087820 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:23.217847 1087820 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:25.218006 1087820 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:27.718486 1087820 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:30.218766 1087820 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:32.717535 1087820 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:34.717458 1087820 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-208098" in "kube-system" namespace has status "Ready":"True"
I0414 13:24:34.717486 1087820 pod_ready.go:82] duration metric: took 18.005408916s for pod "kube-scheduler-old-k8s-version-208098" in "kube-system" namespace to be "Ready" ...
I0414 13:24:34.717499 1087820 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace to be "Ready" ...
I0414 13:24:36.723875 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:39.222945 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:41.223452 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:43.223681 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:45.225413 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:47.723480 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:50.224474 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:52.722820 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:54.798606 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:57.230143 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:24:59.723076 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:01.723888 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:04.223713 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:06.723393 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:09.223464 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:11.722671 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:13.724025 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:16.223832 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:18.723871 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:21.222773 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:23.223796 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:25.722899 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:27.723943 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:30.223834 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:32.223998 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:34.722668 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:36.723017 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:38.723148 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:41.229215 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:43.723264 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:45.724293 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:48.223439 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:50.223538 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:52.724243 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:55.321996 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:57.722660 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:25:59.723647 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:01.724550 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:04.223420 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:06.723644 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:08.729331 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:11.223326 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:13.223830 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:15.722959 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:17.723756 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:20.223529 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:22.723176 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:24.723629 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:26.724408 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:29.223313 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:31.223526 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:33.722557 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:35.723164 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:37.736071 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:40.222287 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:42.225012 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:44.723348 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:46.723705 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:49.223371 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:51.224065 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:53.226939 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:55.723637 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:26:58.223857 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:00.255927 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:02.722863 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:04.723067 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:06.723378 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:09.223584 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:11.722997 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:14.223418 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:16.225100 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:18.723205 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:21.223564 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:23.723646 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:26.222944 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:28.223555 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:30.229465 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:32.723476 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:34.723726 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:36.723773 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:39.222948 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:41.224274 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:43.224891 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:45.234066 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:47.724527 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:50.223498 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:52.723352 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:55.222745 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:57.227392 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:27:59.725273 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:02.224344 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:04.723434 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:07.223274 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:09.723766 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:12.223856 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:14.723732 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:17.223080 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:19.223569 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:21.728582 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:24.222928 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:26.223376 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:28.224628 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:30.722799 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:33.223180 1087820 pod_ready.go:103] pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace has status "Ready":"False"
I0414 13:28:34.726368 1087820 pod_ready.go:82] duration metric: took 4m0.008855487s for pod "metrics-server-9975d5f86-6zb8s" in "kube-system" namespace to be "Ready" ...
E0414 13:28:34.726391 1087820 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0414 13:28:34.726400 1087820 pod_ready.go:39] duration metric: took 5m20.568936579s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0414 13:28:34.726414 1087820 api_server.go:52] waiting for apiserver process to appear ...
I0414 13:28:34.726444 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0414 13:28:34.726503 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0414 13:28:34.790215 1087820 cri.go:89] found id: "b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e"
I0414 13:28:34.790239 1087820 cri.go:89] found id: "7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717"
I0414 13:28:34.790244 1087820 cri.go:89] found id: ""
I0414 13:28:34.790251 1087820 logs.go:282] 2 containers: [b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e 7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717]
I0414 13:28:34.790311 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:34.794561 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:34.801937 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0414 13:28:34.802018 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0414 13:28:34.889747 1087820 cri.go:89] found id: "ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa"
I0414 13:28:34.889770 1087820 cri.go:89] found id: "5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87"
I0414 13:28:34.889776 1087820 cri.go:89] found id: ""
I0414 13:28:34.889783 1087820 logs.go:282] 2 containers: [ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa 5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87]
I0414 13:28:34.889844 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:34.894178 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:34.898016 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0414 13:28:34.898095 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0414 13:28:34.949265 1087820 cri.go:89] found id: "5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2"
I0414 13:28:34.949289 1087820 cri.go:89] found id: "69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652"
I0414 13:28:34.949294 1087820 cri.go:89] found id: ""
I0414 13:28:34.949302 1087820 logs.go:282] 2 containers: [5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2 69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652]
I0414 13:28:34.949365 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:34.953530 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:34.957425 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0414 13:28:34.957528 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0414 13:28:34.998490 1087820 cri.go:89] found id: "7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f"
I0414 13:28:34.998520 1087820 cri.go:89] found id: "102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588"
I0414 13:28:34.998525 1087820 cri.go:89] found id: ""
I0414 13:28:34.998533 1087820 logs.go:282] 2 containers: [7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f 102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588]
I0414 13:28:34.998594 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:35.003098 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:35.008409 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0414 13:28:35.008514 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0414 13:28:35.082766 1087820 cri.go:89] found id: "309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f"
I0414 13:28:35.082834 1087820 cri.go:89] found id: "8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a"
I0414 13:28:35.082856 1087820 cri.go:89] found id: ""
I0414 13:28:35.082884 1087820 logs.go:282] 2 containers: [309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f 8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a]
I0414 13:28:35.082985 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:35.087637 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:35.092359 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0414 13:28:35.092494 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0414 13:28:35.146597 1087820 cri.go:89] found id: "a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150"
I0414 13:28:35.146620 1087820 cri.go:89] found id: "ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa"
I0414 13:28:35.146626 1087820 cri.go:89] found id: ""
I0414 13:28:35.146633 1087820 logs.go:282] 2 containers: [a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150 ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa]
I0414 13:28:35.146719 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:35.151444 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:35.156018 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0414 13:28:35.156137 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0414 13:28:35.211215 1087820 cri.go:89] found id: "ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110"
I0414 13:28:35.211238 1087820 cri.go:89] found id: "2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832"
I0414 13:28:35.211243 1087820 cri.go:89] found id: ""
I0414 13:28:35.211250 1087820 logs.go:282] 2 containers: [ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110 2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832]
I0414 13:28:35.211332 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:35.216369 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:35.221908 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0414 13:28:35.222036 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0414 13:28:35.294545 1087820 cri.go:89] found id: "550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c"
I0414 13:28:35.294566 1087820 cri.go:89] found id: ""
I0414 13:28:35.294575 1087820 logs.go:282] 1 containers: [550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c]
I0414 13:28:35.294656 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:35.299141 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0414 13:28:35.299259 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0414 13:28:35.361689 1087820 cri.go:89] found id: "56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c"
I0414 13:28:35.361711 1087820 cri.go:89] found id: "f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6"
I0414 13:28:35.361724 1087820 cri.go:89] found id: ""
I0414 13:28:35.361760 1087820 logs.go:282] 2 containers: [56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6]
I0414 13:28:35.361841 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:35.367705 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:35.371904 1087820 logs.go:123] Gathering logs for kube-proxy [8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a] ...
I0414 13:28:35.371925 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a"
I0414 13:28:35.429452 1087820 logs.go:123] Gathering logs for kube-controller-manager [a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150] ...
I0414 13:28:35.429514 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150"
I0414 13:28:35.516056 1087820 logs.go:123] Gathering logs for kindnet [ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110] ...
I0414 13:28:35.516099 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110"
I0414 13:28:35.576141 1087820 logs.go:123] Gathering logs for kubernetes-dashboard [550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c] ...
I0414 13:28:35.576511 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c"
I0414 13:28:35.644118 1087820 logs.go:123] Gathering logs for kube-proxy [309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f] ...
I0414 13:28:35.644145 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f"
I0414 13:28:35.685367 1087820 logs.go:123] Gathering logs for kubelet ...
I0414 13:28:35.685441 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0414 13:28:35.748102 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.182510 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-wwbtf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-wwbtf" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:35.748605 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234087 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:35.748932 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234165 662 reflector.go:138] object-"kube-system"/"kindnet-token-nhv25": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nhv25" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:35.749223 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234223 662 reflector.go:138] object-"kube-system"/"coredns-token-hhfd8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-hhfd8" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:35.749532 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234276 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:35.750348 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234329 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-9w6cs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-9w6cs" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:35.757407 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.388675 662 reflector.go:138] object-"default"/"default-token-lkd8z": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-lkd8z" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:35.760612 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.388739 662 reflector.go:138] object-"kube-system"/"metrics-server-token-fw4rr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fw4rr" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:35.774694 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:17 old-k8s-version-208098 kubelet[662]: E0414 13:23:17.967296 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:35.774910 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:18 old-k8s-version-208098 kubelet[662]: E0414 13:23:18.520093 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.780155 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:33 old-k8s-version-208098 kubelet[662]: E0414 13:23:33.291389 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:35.782763 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:44 old-k8s-version-208098 kubelet[662]: E0414 13:23:44.293462 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.786631 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:47 old-k8s-version-208098 kubelet[662]: E0414 13:23:47.683929 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.787169 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:48 old-k8s-version-208098 kubelet[662]: E0414 13:23:48.688109 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.787515 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:49 old-k8s-version-208098 kubelet[662]: E0414 13:23:49.691043 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.791344 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:56 old-k8s-version-208098 kubelet[662]: E0414 13:23:56.291650 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:35.791958 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:04 old-k8s-version-208098 kubelet[662]: E0414 13:24:04.746285 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.792289 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:07 old-k8s-version-208098 kubelet[662]: E0414 13:24:07.745784 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.792475 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:09 old-k8s-version-208098 kubelet[662]: E0414 13:24:09.281525 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.792660 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:21 old-k8s-version-208098 kubelet[662]: E0414 13:24:21.285170 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.793071 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:22 old-k8s-version-208098 kubelet[662]: E0414 13:24:22.280812 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.793744 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:34 old-k8s-version-208098 kubelet[662]: E0414 13:24:34.281206 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.795203 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:35 old-k8s-version-208098 kubelet[662]: E0414 13:24:35.832817 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.795585 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:37 old-k8s-version-208098 kubelet[662]: E0414 13:24:37.746200 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.801064 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:47 old-k8s-version-208098 kubelet[662]: E0414 13:24:47.292155 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:35.801411 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:53 old-k8s-version-208098 kubelet[662]: E0414 13:24:53.281914 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.801660 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:59 old-k8s-version-208098 kubelet[662]: E0414 13:24:59.281828 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.802174 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:05 old-k8s-version-208098 kubelet[662]: E0414 13:25:05.281366 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.802364 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:10 old-k8s-version-208098 kubelet[662]: E0414 13:25:10.281442 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.803039 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:17 old-k8s-version-208098 kubelet[662]: E0414 13:25:17.957823 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.803237 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:24 old-k8s-version-208098 kubelet[662]: E0414 13:25:24.281394 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.803566 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:27 old-k8s-version-208098 kubelet[662]: E0414 13:25:27.745793 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.803752 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:36 old-k8s-version-208098 kubelet[662]: E0414 13:25:36.281304 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.804220 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:39 old-k8s-version-208098 kubelet[662]: E0414 13:25:39.280829 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.804437 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:49 old-k8s-version-208098 kubelet[662]: E0414 13:25:49.285311 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.804804 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:54 old-k8s-version-208098 kubelet[662]: E0414 13:25:54.280872 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.805005 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:03 old-k8s-version-208098 kubelet[662]: E0414 13:26:03.281419 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.805691 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:09 old-k8s-version-208098 kubelet[662]: E0414 13:26:09.284722 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.815093 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:15 old-k8s-version-208098 kubelet[662]: E0414 13:26:15.299114 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:35.816766 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:24 old-k8s-version-208098 kubelet[662]: E0414 13:26:24.280839 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.817006 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:29 old-k8s-version-208098 kubelet[662]: E0414 13:26:29.281837 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.817653 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:40 old-k8s-version-208098 kubelet[662]: E0414 13:26:40.191923 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.817872 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:43 old-k8s-version-208098 kubelet[662]: E0414 13:26:43.281861 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.819050 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:47 old-k8s-version-208098 kubelet[662]: E0414 13:26:47.746404 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.819308 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:54 old-k8s-version-208098 kubelet[662]: E0414 13:26:54.281826 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.819681 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:02 old-k8s-version-208098 kubelet[662]: E0414 13:27:02.280879 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.819896 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:09 old-k8s-version-208098 kubelet[662]: E0414 13:27:09.285633 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.820248 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:15 old-k8s-version-208098 kubelet[662]: E0414 13:27:15.281925 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.820458 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:23 old-k8s-version-208098 kubelet[662]: E0414 13:27:23.281150 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.820809 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:28 old-k8s-version-208098 kubelet[662]: E0414 13:27:28.280821 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.821345 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:38 old-k8s-version-208098 kubelet[662]: E0414 13:27:38.281280 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.821743 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:42 old-k8s-version-208098 kubelet[662]: E0414 13:27:42.281001 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.822210 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:49 old-k8s-version-208098 kubelet[662]: E0414 13:27:49.281897 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.822847 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:54 old-k8s-version-208098 kubelet[662]: E0414 13:27:54.280822 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.823099 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:04 old-k8s-version-208098 kubelet[662]: E0414 13:28:04.281150 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.823948 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:07 old-k8s-version-208098 kubelet[662]: E0414 13:28:07.281556 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.824200 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:15 old-k8s-version-208098 kubelet[662]: E0414 13:28:15.282302 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.824703 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:22 old-k8s-version-208098 kubelet[662]: E0414 13:28:22.281274 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:35.824931 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:27 old-k8s-version-208098 kubelet[662]: E0414 13:28:27.281688 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:35.825291 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:33 old-k8s-version-208098 kubelet[662]: E0414 13:28:33.281440 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
I0414 13:28:35.825327 1087820 logs.go:123] Gathering logs for describe nodes ...
I0414 13:28:35.825352 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0414 13:28:36.040931 1087820 logs.go:123] Gathering logs for coredns [69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652] ...
I0414 13:28:36.041018 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652"
I0414 13:28:36.092763 1087820 logs.go:123] Gathering logs for kube-scheduler [102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588] ...
I0414 13:28:36.092838 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588"
I0414 13:28:36.160000 1087820 logs.go:123] Gathering logs for kube-controller-manager [ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa] ...
I0414 13:28:36.160080 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa"
I0414 13:28:36.248440 1087820 logs.go:123] Gathering logs for containerd ...
I0414 13:28:36.248530 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0414 13:28:36.318024 1087820 logs.go:123] Gathering logs for etcd [ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa] ...
I0414 13:28:36.318063 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa"
I0414 13:28:36.377716 1087820 logs.go:123] Gathering logs for storage-provisioner [56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c] ...
I0414 13:28:36.377751 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c"
I0414 13:28:36.431731 1087820 logs.go:123] Gathering logs for container status ...
I0414 13:28:36.431760 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0414 13:28:36.489549 1087820 logs.go:123] Gathering logs for kube-apiserver [b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e] ...
I0414 13:28:36.489581 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e"
I0414 13:28:36.573244 1087820 logs.go:123] Gathering logs for etcd [5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87] ...
I0414 13:28:36.573318 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87"
I0414 13:28:36.638155 1087820 logs.go:123] Gathering logs for coredns [5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2] ...
I0414 13:28:36.638296 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2"
I0414 13:28:36.714109 1087820 logs.go:123] Gathering logs for kube-scheduler [7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f] ...
I0414 13:28:36.714193 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f"
I0414 13:28:36.788218 1087820 logs.go:123] Gathering logs for kindnet [2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832] ...
I0414 13:28:36.788253 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832"
I0414 13:28:36.905959 1087820 logs.go:123] Gathering logs for storage-provisioner [f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6] ...
I0414 13:28:36.906042 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6"
I0414 13:28:37.011097 1087820 logs.go:123] Gathering logs for dmesg ...
I0414 13:28:37.011130 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0414 13:28:37.035131 1087820 logs.go:123] Gathering logs for kube-apiserver [7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717] ...
I0414 13:28:37.035161 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717"
I0414 13:28:37.139116 1087820 out.go:358] Setting ErrFile to fd 2...
I0414 13:28:37.139191 1087820 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0414 13:28:37.139279 1087820 out.go:270] X Problems detected in kubelet:
W0414 13:28:37.139322 1087820 out.go:270] Apr 14 13:28:07 old-k8s-version-208098 kubelet[662]: E0414 13:28:07.281556 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:37.139361 1087820 out.go:270] Apr 14 13:28:15 old-k8s-version-208098 kubelet[662]: E0414 13:28:15.282302 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:37.139410 1087820 out.go:270] Apr 14 13:28:22 old-k8s-version-208098 kubelet[662]: E0414 13:28:22.281274 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:37.139450 1087820 out.go:270] Apr 14 13:28:27 old-k8s-version-208098 kubelet[662]: E0414 13:28:27.281688 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:37.139496 1087820 out.go:270] Apr 14 13:28:33 old-k8s-version-208098 kubelet[662]: E0414 13:28:33.281440 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
I0414 13:28:37.139538 1087820 out.go:358] Setting ErrFile to fd 2...
I0414 13:28:37.139558 1087820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:28:47.141862 1087820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0414 13:28:47.154963 1087820 api_server.go:72] duration metric: took 5m49.870694905s to wait for apiserver process to appear ...
I0414 13:28:47.154990 1087820 api_server.go:88] waiting for apiserver healthz status ...
I0414 13:28:47.155026 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0414 13:28:47.155083 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0414 13:28:47.195449 1087820 cri.go:89] found id: "b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e"
I0414 13:28:47.195470 1087820 cri.go:89] found id: "7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717"
I0414 13:28:47.195475 1087820 cri.go:89] found id: ""
I0414 13:28:47.195483 1087820 logs.go:282] 2 containers: [b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e 7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717]
I0414 13:28:47.195541 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.199684 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.203283 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0414 13:28:47.203356 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0414 13:28:47.246189 1087820 cri.go:89] found id: "ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa"
I0414 13:28:47.246212 1087820 cri.go:89] found id: "5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87"
I0414 13:28:47.246217 1087820 cri.go:89] found id: ""
I0414 13:28:47.246224 1087820 logs.go:282] 2 containers: [ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa 5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87]
I0414 13:28:47.246279 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.250403 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.254165 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0414 13:28:47.254247 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0414 13:28:47.301169 1087820 cri.go:89] found id: "5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2"
I0414 13:28:47.301193 1087820 cri.go:89] found id: "69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652"
I0414 13:28:47.301198 1087820 cri.go:89] found id: ""
I0414 13:28:47.301205 1087820 logs.go:282] 2 containers: [5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2 69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652]
I0414 13:28:47.301264 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.305335 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.308966 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0414 13:28:47.309073 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0414 13:28:47.355346 1087820 cri.go:89] found id: "7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f"
I0414 13:28:47.355414 1087820 cri.go:89] found id: "102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588"
I0414 13:28:47.355434 1087820 cri.go:89] found id: ""
I0414 13:28:47.355449 1087820 logs.go:282] 2 containers: [7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f 102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588]
I0414 13:28:47.355544 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.359479 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.363132 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0414 13:28:47.363211 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0414 13:28:47.430687 1087820 cri.go:89] found id: "309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f"
I0414 13:28:47.430712 1087820 cri.go:89] found id: "8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a"
I0414 13:28:47.430719 1087820 cri.go:89] found id: ""
I0414 13:28:47.430727 1087820 logs.go:282] 2 containers: [309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f 8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a]
I0414 13:28:47.430786 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.435414 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.442224 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0414 13:28:47.442318 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0414 13:28:47.507894 1087820 cri.go:89] found id: "a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150"
I0414 13:28:47.507919 1087820 cri.go:89] found id: "ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa"
I0414 13:28:47.507924 1087820 cri.go:89] found id: ""
I0414 13:28:47.507931 1087820 logs.go:282] 2 containers: [a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150 ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa]
I0414 13:28:47.508012 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.511936 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.515642 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0414 13:28:47.515712 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0414 13:28:47.598489 1087820 cri.go:89] found id: "ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110"
I0414 13:28:47.598513 1087820 cri.go:89] found id: "2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832"
I0414 13:28:47.598518 1087820 cri.go:89] found id: ""
I0414 13:28:47.598526 1087820 logs.go:282] 2 containers: [ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110 2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832]
I0414 13:28:47.598581 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.602836 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.607458 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0414 13:28:47.607527 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0414 13:28:47.685395 1087820 cri.go:89] found id: "550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c"
I0414 13:28:47.685415 1087820 cri.go:89] found id: ""
I0414 13:28:47.685424 1087820 logs.go:282] 1 containers: [550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c]
I0414 13:28:47.685511 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.690779 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0414 13:28:47.690852 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0414 13:28:47.748780 1087820 cri.go:89] found id: "56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c"
I0414 13:28:47.748806 1087820 cri.go:89] found id: "f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6"
I0414 13:28:47.748811 1087820 cri.go:89] found id: ""
I0414 13:28:47.748818 1087820 logs.go:282] 2 containers: [56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6]
I0414 13:28:47.748873 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.752966 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.756860 1087820 logs.go:123] Gathering logs for kube-scheduler [7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f] ...
I0414 13:28:47.756925 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f"
I0414 13:28:47.811393 1087820 logs.go:123] Gathering logs for kube-proxy [8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a] ...
I0414 13:28:47.811424 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a"
I0414 13:28:47.929548 1087820 logs.go:123] Gathering logs for kube-controller-manager [a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150] ...
I0414 13:28:47.929579 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150"
I0414 13:28:48.292975 1087820 logs.go:123] Gathering logs for kube-controller-manager [ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa] ...
I0414 13:28:48.293019 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa"
I0414 13:28:48.416421 1087820 logs.go:123] Gathering logs for kubernetes-dashboard [550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c] ...
I0414 13:28:48.416455 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c"
I0414 13:28:48.542826 1087820 logs.go:123] Gathering logs for containerd ...
I0414 13:28:48.542864 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0414 13:28:48.690583 1087820 logs.go:123] Gathering logs for storage-provisioner [f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6] ...
I0414 13:28:48.690668 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6"
I0414 13:28:48.799562 1087820 logs.go:123] Gathering logs for describe nodes ...
I0414 13:28:48.799592 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0414 13:28:49.020857 1087820 logs.go:123] Gathering logs for kube-apiserver [b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e] ...
I0414 13:28:49.020889 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e"
I0414 13:28:49.170699 1087820 logs.go:123] Gathering logs for kube-apiserver [7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717] ...
I0414 13:28:49.170739 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717"
I0414 13:28:49.247835 1087820 logs.go:123] Gathering logs for etcd [ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa] ...
I0414 13:28:49.247872 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa"
I0414 13:28:49.305040 1087820 logs.go:123] Gathering logs for kindnet [ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110] ...
I0414 13:28:49.305117 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110"
I0414 13:28:49.355632 1087820 logs.go:123] Gathering logs for kindnet [2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832] ...
I0414 13:28:49.355664 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832"
I0414 13:28:49.399256 1087820 logs.go:123] Gathering logs for container status ...
I0414 13:28:49.399284 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0414 13:28:49.448958 1087820 logs.go:123] Gathering logs for dmesg ...
I0414 13:28:49.448991 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0414 13:28:49.468450 1087820 logs.go:123] Gathering logs for etcd [5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87] ...
I0414 13:28:49.468522 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87"
I0414 13:28:49.510148 1087820 logs.go:123] Gathering logs for coredns [69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652] ...
I0414 13:28:49.510177 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652"
I0414 13:28:49.557180 1087820 logs.go:123] Gathering logs for kube-scheduler [102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588] ...
I0414 13:28:49.557258 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588"
I0414 13:28:49.618798 1087820 logs.go:123] Gathering logs for kube-proxy [309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f] ...
I0414 13:28:49.618830 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f"
I0414 13:28:49.676689 1087820 logs.go:123] Gathering logs for storage-provisioner [56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c] ...
I0414 13:28:49.676720 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c"
I0414 13:28:49.718944 1087820 logs.go:123] Gathering logs for kubelet ...
I0414 13:28:49.718972 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0414 13:28:49.771258 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.182510 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-wwbtf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-wwbtf" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.771714 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234087 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.771963 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234165 662 reflector.go:138] object-"kube-system"/"kindnet-token-nhv25": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nhv25" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.772203 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234223 662 reflector.go:138] object-"kube-system"/"coredns-token-hhfd8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-hhfd8" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.772473 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234276 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.772748 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234329 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-9w6cs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-9w6cs" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.777845 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.388675 662 reflector.go:138] object-"default"/"default-token-lkd8z": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-lkd8z" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.778124 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.388739 662 reflector.go:138] object-"kube-system"/"metrics-server-token-fw4rr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fw4rr" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.787529 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:17 old-k8s-version-208098 kubelet[662]: E0414 13:23:17.967296 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:49.787837 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:18 old-k8s-version-208098 kubelet[662]: E0414 13:23:18.520093 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.792155 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:33 old-k8s-version-208098 kubelet[662]: E0414 13:23:33.291389 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:49.792816 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:44 old-k8s-version-208098 kubelet[662]: E0414 13:23:44.293462 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.793712 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:47 old-k8s-version-208098 kubelet[662]: E0414 13:23:47.683929 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.794231 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:48 old-k8s-version-208098 kubelet[662]: E0414 13:23:48.688109 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.794653 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:49 old-k8s-version-208098 kubelet[662]: E0414 13:23:49.691043 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.797699 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:56 old-k8s-version-208098 kubelet[662]: E0414 13:23:56.291650 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:49.798330 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:04 old-k8s-version-208098 kubelet[662]: E0414 13:24:04.746285 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.798758 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:07 old-k8s-version-208098 kubelet[662]: E0414 13:24:07.745784 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.798974 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:09 old-k8s-version-208098 kubelet[662]: E0414 13:24:09.281525 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.799187 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:21 old-k8s-version-208098 kubelet[662]: E0414 13:24:21.285170 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.799540 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:22 old-k8s-version-208098 kubelet[662]: E0414 13:24:22.280812 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.799792 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:34 old-k8s-version-208098 kubelet[662]: E0414 13:24:34.281206 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.800449 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:35 old-k8s-version-208098 kubelet[662]: E0414 13:24:35.832817 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.800855 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:37 old-k8s-version-208098 kubelet[662]: E0414 13:24:37.746200 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.803453 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:47 old-k8s-version-208098 kubelet[662]: E0414 13:24:47.292155 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:49.803821 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:53 old-k8s-version-208098 kubelet[662]: E0414 13:24:53.281914 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.804036 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:59 old-k8s-version-208098 kubelet[662]: E0414 13:24:59.281828 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.804464 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:05 old-k8s-version-208098 kubelet[662]: E0414 13:25:05.281366 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.804680 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:10 old-k8s-version-208098 kubelet[662]: E0414 13:25:10.281442 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.805291 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:17 old-k8s-version-208098 kubelet[662]: E0414 13:25:17.957823 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.805516 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:24 old-k8s-version-208098 kubelet[662]: E0414 13:25:24.281394 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.805919 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:27 old-k8s-version-208098 kubelet[662]: E0414 13:25:27.745793 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.806133 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:36 old-k8s-version-208098 kubelet[662]: E0414 13:25:36.281304 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.806488 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:39 old-k8s-version-208098 kubelet[662]: E0414 13:25:39.280829 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.806697 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:49 old-k8s-version-208098 kubelet[662]: E0414 13:25:49.285311 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.807084 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:54 old-k8s-version-208098 kubelet[662]: E0414 13:25:54.280872 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.807297 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:03 old-k8s-version-208098 kubelet[662]: E0414 13:26:03.281419 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.807652 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:09 old-k8s-version-208098 kubelet[662]: E0414 13:26:09.284722 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.810206 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:15 old-k8s-version-208098 kubelet[662]: E0414 13:26:15.299114 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:49.810567 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:24 old-k8s-version-208098 kubelet[662]: E0414 13:26:24.280839 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.810778 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:29 old-k8s-version-208098 kubelet[662]: E0414 13:26:29.281837 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.811390 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:40 old-k8s-version-208098 kubelet[662]: E0414 13:26:40.191923 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.811603 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:43 old-k8s-version-208098 kubelet[662]: E0414 13:26:43.281861 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.811956 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:47 old-k8s-version-208098 kubelet[662]: E0414 13:26:47.746404 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.812167 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:54 old-k8s-version-208098 kubelet[662]: E0414 13:26:54.281826 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.812522 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:02 old-k8s-version-208098 kubelet[662]: E0414 13:27:02.280879 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.812738 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:09 old-k8s-version-208098 kubelet[662]: E0414 13:27:09.285633 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.813091 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:15 old-k8s-version-208098 kubelet[662]: E0414 13:27:15.281925 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.813301 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:23 old-k8s-version-208098 kubelet[662]: E0414 13:27:23.281150 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.813696 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:28 old-k8s-version-208098 kubelet[662]: E0414 13:27:28.280821 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.813912 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:38 old-k8s-version-208098 kubelet[662]: E0414 13:27:38.281280 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.814265 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:42 old-k8s-version-208098 kubelet[662]: E0414 13:27:42.281001 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.814475 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:49 old-k8s-version-208098 kubelet[662]: E0414 13:27:49.281897 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.814826 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:54 old-k8s-version-208098 kubelet[662]: E0414 13:27:54.280822 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.815036 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:04 old-k8s-version-208098 kubelet[662]: E0414 13:28:04.281150 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.815387 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:07 old-k8s-version-208098 kubelet[662]: E0414 13:28:07.281556 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.815599 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:15 old-k8s-version-208098 kubelet[662]: E0414 13:28:15.282302 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.815951 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:22 old-k8s-version-208098 kubelet[662]: E0414 13:28:22.281274 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.816167 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:27 old-k8s-version-208098 kubelet[662]: E0414 13:28:27.281688 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.816542 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:33 old-k8s-version-208098 kubelet[662]: E0414 13:28:33.281440 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.816754 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:40 old-k8s-version-208098 kubelet[662]: E0414 13:28:40.281074 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.817104 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:48 old-k8s-version-208098 kubelet[662]: E0414 13:28:48.295936 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
I0414 13:28:49.817131 1087820 logs.go:123] Gathering logs for coredns [5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2] ...
I0414 13:28:49.817158 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2"
I0414 13:28:49.873896 1087820 out.go:358] Setting ErrFile to fd 2...
I0414 13:28:49.873968 1087820 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0414 13:28:49.874026 1087820 out.go:270] X Problems detected in kubelet:
W0414 13:28:49.874041 1087820 out.go:270] Apr 14 13:28:22 old-k8s-version-208098 kubelet[662]: E0414 13:28:22.281274 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.874047 1087820 out.go:270] Apr 14 13:28:27 old-k8s-version-208098 kubelet[662]: E0414 13:28:27.281688 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.874062 1087820 out.go:270] Apr 14 13:28:33 old-k8s-version-208098 kubelet[662]: E0414 13:28:33.281440 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.874067 1087820 out.go:270] Apr 14 13:28:40 old-k8s-version-208098 kubelet[662]: E0414 13:28:40.281074 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.874076 1087820 out.go:270] Apr 14 13:28:48 old-k8s-version-208098 kubelet[662]: E0414 13:28:48.295936 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
I0414 13:28:49.874086 1087820 out.go:358] Setting ErrFile to fd 2...
I0414 13:28:49.874091 1087820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:28:59.874717 1087820 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0414 13:28:59.885631 1087820 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0414 13:28:59.889194 1087820 out.go:201]
W0414 13:28:59.892239 1087820 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0414 13:28:59.892487 1087820 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0414 13:28:59.892544 1087820 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0414 13:28:59.892592 1087820 out.go:270] *
W0414 13:28:59.893569 1087820 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0414 13:28:59.895626 1087820 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-208098 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-208098
helpers_test.go:235: (dbg) docker inspect old-k8s-version-208098:
-- stdout --
[
{
"Id": "e2f605e0dc8df2917b4f8b3a24a90e2bd61af74b4b871cd8015aadea6a620de0",
"Created": "2025-04-14T13:20:09.221547033Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1087949,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-04-14T13:22:50.136341023Z",
"FinishedAt": "2025-04-14T13:22:49.226824599Z"
},
"Image": "sha256:e51065ad0661308920dfd7c7ddda445e530a6bf56321f8317cb47e1df0975e7c",
"ResolvConfPath": "/var/lib/docker/containers/e2f605e0dc8df2917b4f8b3a24a90e2bd61af74b4b871cd8015aadea6a620de0/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/e2f605e0dc8df2917b4f8b3a24a90e2bd61af74b4b871cd8015aadea6a620de0/hostname",
"HostsPath": "/var/lib/docker/containers/e2f605e0dc8df2917b4f8b3a24a90e2bd61af74b4b871cd8015aadea6a620de0/hosts",
"LogPath": "/var/lib/docker/containers/e2f605e0dc8df2917b4f8b3a24a90e2bd61af74b4b871cd8015aadea6a620de0/e2f605e0dc8df2917b4f8b3a24a90e2bd61af74b4b871cd8015aadea6a620de0-json.log",
"Name": "/old-k8s-version-208098",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-208098:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-208098",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "e2f605e0dc8df2917b4f8b3a24a90e2bd61af74b4b871cd8015aadea6a620de0",
"LowerDir": "/var/lib/docker/overlay2/983df56c9c0fb32234dd4bf4bb1991bd4592c8100d3e5c09c25723e8fe970d39-init/diff:/var/lib/docker/overlay2/c22280caa7d64f6e7bf30f2504a2f19f9fb5a01bd9e91a1b9502c2a7be422bb3/diff",
"MergedDir": "/var/lib/docker/overlay2/983df56c9c0fb32234dd4bf4bb1991bd4592c8100d3e5c09c25723e8fe970d39/merged",
"UpperDir": "/var/lib/docker/overlay2/983df56c9c0fb32234dd4bf4bb1991bd4592c8100d3e5c09c25723e8fe970d39/diff",
"WorkDir": "/var/lib/docker/overlay2/983df56c9c0fb32234dd4bf4bb1991bd4592c8100d3e5c09c25723e8fe970d39/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "volume",
"Name": "old-k8s-version-208098",
"Source": "/var/lib/docker/volumes/old-k8s-version-208098/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "old-k8s-version-208098",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-208098",
"name.minikube.sigs.k8s.io": "old-k8s-version-208098",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "f2ba52b9641ff30a645011a10a18110d553494ea453382695c689da5cfff16bf",
"SandboxKey": "/var/run/docker/netns/f2ba52b9641f",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34168"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34169"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34172"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34170"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34171"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-208098": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "d2:43:79:69:70:99",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "874f5127f736f30e6b759e5d3171cd2cd4ec26dd157c70b802fd34f651980516",
"EndpointID": "8cb11716f3cfc3c1198b4ab15bcc1122413cc172a5f10b846ec819c94b1882c3",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-208098",
"e2f605e0dc8d"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-208098 -n old-k8s-version-208098
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-208098 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-208098 logs -n 25: (2.855448334s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| start | -p force-systemd-flag-932379 | force-systemd-flag-932379 | jenkins | v1.35.0 | 14 Apr 25 13:18 UTC | 14 Apr 25 13:19 UTC |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-flag-932379 | force-systemd-flag-932379 | jenkins | v1.35.0 | 14 Apr 25 13:19 UTC | 14 Apr 25 13:19 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-flag-932379 | force-systemd-flag-932379 | jenkins | v1.35.0 | 14 Apr 25 13:19 UTC | 14 Apr 25 13:19 UTC |
| start | -p cert-options-689007 | cert-options-689007 | jenkins | v1.35.0 | 14 Apr 25 13:19 UTC | 14 Apr 25 13:19 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-689007 ssh | cert-options-689007 | jenkins | v1.35.0 | 14 Apr 25 13:19 UTC | 14 Apr 25 13:19 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-689007 -- sudo | cert-options-689007 | jenkins | v1.35.0 | 14 Apr 25 13:19 UTC | 14 Apr 25 13:19 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-689007 | cert-options-689007 | jenkins | v1.35.0 | 14 Apr 25 13:19 UTC | 14 Apr 25 13:20 UTC |
| start | -p old-k8s-version-208098 | old-k8s-version-208098 | jenkins | v1.35.0 | 14 Apr 25 13:20 UTC | 14 Apr 25 13:22 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-704522 | cert-expiration-704522 | jenkins | v1.35.0 | 14 Apr 25 13:21 UTC | 14 Apr 25 13:21 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-704522 | cert-expiration-704522 | jenkins | v1.35.0 | 14 Apr 25 13:21 UTC | 14 Apr 25 13:21 UTC |
| start | -p no-preload-034779 | no-preload-034779 | jenkins | v1.35.0 | 14 Apr 25 13:21 UTC | 14 Apr 25 13:23 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p old-k8s-version-208098 | old-k8s-version-208098 | jenkins | v1.35.0 | 14 Apr 25 13:22 UTC | 14 Apr 25 13:22 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-208098 | old-k8s-version-208098 | jenkins | v1.35.0 | 14 Apr 25 13:22 UTC | 14 Apr 25 13:22 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-208098 | old-k8s-version-208098 | jenkins | v1.35.0 | 14 Apr 25 13:22 UTC | 14 Apr 25 13:22 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-208098 | old-k8s-version-208098 | jenkins | v1.35.0 | 14 Apr 25 13:22 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-034779 | no-preload-034779 | jenkins | v1.35.0 | 14 Apr 25 13:23 UTC | 14 Apr 25 13:23 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-034779 | no-preload-034779 | jenkins | v1.35.0 | 14 Apr 25 13:23 UTC | 14 Apr 25 13:23 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-034779 | no-preload-034779 | jenkins | v1.35.0 | 14 Apr 25 13:23 UTC | 14 Apr 25 13:23 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-034779 | no-preload-034779 | jenkins | v1.35.0 | 14 Apr 25 13:23 UTC | 14 Apr 25 13:28 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| image | no-preload-034779 image list | no-preload-034779 | jenkins | v1.35.0 | 14 Apr 25 13:28 UTC | 14 Apr 25 13:28 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-034779 | no-preload-034779 | jenkins | v1.35.0 | 14 Apr 25 13:28 UTC | 14 Apr 25 13:28 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-034779 | no-preload-034779 | jenkins | v1.35.0 | 14 Apr 25 13:28 UTC | 14 Apr 25 13:28 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-034779 | no-preload-034779 | jenkins | v1.35.0 | 14 Apr 25 13:28 UTC | 14 Apr 25 13:28 UTC |
| delete | -p no-preload-034779 | no-preload-034779 | jenkins | v1.35.0 | 14 Apr 25 13:28 UTC | 14 Apr 25 13:28 UTC |
| start | -p embed-certs-175663 | embed-certs-175663 | jenkins | v1.35.0 | 14 Apr 25 13:28 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/14 13:28:41
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.24.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0414 13:28:41.561324 1097960 out.go:345] Setting OutFile to fd 1 ...
I0414 13:28:41.561482 1097960 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:28:41.561517 1097960 out.go:358] Setting ErrFile to fd 2...
I0414 13:28:41.561525 1097960 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:28:41.561843 1097960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20384-872300/.minikube/bin
I0414 13:28:41.562339 1097960 out.go:352] Setting JSON to false
I0414 13:28:41.563432 1097960 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":18666,"bootTime":1744618656,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I0414 13:28:41.563504 1097960 start.go:139] virtualization:
I0414 13:28:41.567451 1097960 out.go:177] * [embed-certs-175663] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0414 13:28:41.570754 1097960 out.go:177] - MINIKUBE_LOCATION=20384
I0414 13:28:41.570789 1097960 notify.go:220] Checking for updates...
I0414 13:28:41.575007 1097960 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0414 13:28:41.578603 1097960 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20384-872300/kubeconfig
I0414 13:28:41.581648 1097960 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20384-872300/.minikube
I0414 13:28:41.585823 1097960 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0414 13:28:41.588967 1097960 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0414 13:28:41.593483 1097960 config.go:182] Loaded profile config "old-k8s-version-208098": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0414 13:28:41.593687 1097960 driver.go:394] Setting default libvirt URI to qemu:///system
I0414 13:28:41.627950 1097960 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0414 13:28:41.628084 1097960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0414 13:28:41.684258 1097960 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-14 13:28:41.673853384 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0414 13:28:41.684369 1097960 docker.go:318] overlay module found
I0414 13:28:41.687491 1097960 out.go:177] * Using the docker driver based on user configuration
I0414 13:28:41.690382 1097960 start.go:297] selected driver: docker
I0414 13:28:41.690404 1097960 start.go:901] validating driver "docker" against <nil>
I0414 13:28:41.690419 1097960 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0414 13:28:41.691146 1097960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0414 13:28:41.751019 1097960 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-14 13:28:41.741874298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0414 13:28:41.751188 1097960 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0414 13:28:41.751418 1097960 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0414 13:28:41.754447 1097960 out.go:177] * Using Docker driver with root privileges
I0414 13:28:41.757400 1097960 cni.go:84] Creating CNI manager for ""
I0414 13:28:41.757473 1097960 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0414 13:28:41.757495 1097960 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0414 13:28:41.757583 1097960 start.go:340] cluster config:
{Name:embed-certs-175663 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-175663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 13:28:41.762719 1097960 out.go:177] * Starting "embed-certs-175663" primary control-plane node in "embed-certs-175663" cluster
I0414 13:28:41.765575 1097960 cache.go:121] Beginning downloading kic base image for docker with containerd
I0414 13:28:41.768596 1097960 out.go:177] * Pulling base image v0.0.46-1744107393-20604 ...
I0414 13:28:41.771512 1097960 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 13:28:41.771558 1097960 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon
I0414 13:28:41.771578 1097960 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20384-872300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4
I0414 13:28:41.771593 1097960 cache.go:56] Caching tarball of preloaded images
I0414 13:28:41.771692 1097960 preload.go:172] Found /home/jenkins/minikube-integration/20384-872300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0414 13:28:41.771705 1097960 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0414 13:28:41.771818 1097960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/config.json ...
I0414 13:28:41.771845 1097960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/config.json: {Name:mk517e70929c499e7b87bdc3fb67b18ca81da075 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:28:41.792107 1097960 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon, skipping pull
I0414 13:28:41.792134 1097960 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a exists in daemon, skipping load
I0414 13:28:41.792149 1097960 cache.go:230] Successfully downloaded all kic artifacts
I0414 13:28:41.792181 1097960 start.go:360] acquireMachinesLock for embed-certs-175663: {Name:mka3dd8ab267fcf7d72b59e62ef0219843b8d696 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 13:28:41.792293 1097960 start.go:364] duration metric: took 83.94µs to acquireMachinesLock for "embed-certs-175663"
I0414 13:28:41.792326 1097960 start.go:93] Provisioning new machine with config: &{Name:embed-certs-175663 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-175663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0414 13:28:41.792401 1097960 start.go:125] createHost starting for "" (driver="docker")
I0414 13:28:41.795924 1097960 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0414 13:28:41.796171 1097960 start.go:159] libmachine.API.Create for "embed-certs-175663" (driver="docker")
I0414 13:28:41.796224 1097960 client.go:168] LocalClient.Create starting
I0414 13:28:41.796324 1097960 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca.pem
I0414 13:28:41.796363 1097960 main.go:141] libmachine: Decoding PEM data...
I0414 13:28:41.796381 1097960 main.go:141] libmachine: Parsing certificate...
I0414 13:28:41.796446 1097960 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20384-872300/.minikube/certs/cert.pem
I0414 13:28:41.796542 1097960 main.go:141] libmachine: Decoding PEM data...
I0414 13:28:41.796559 1097960 main.go:141] libmachine: Parsing certificate...
I0414 13:28:41.796980 1097960 cli_runner.go:164] Run: docker network inspect embed-certs-175663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0414 13:28:41.813533 1097960 cli_runner.go:211] docker network inspect embed-certs-175663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0414 13:28:41.813650 1097960 network_create.go:284] running [docker network inspect embed-certs-175663] to gather additional debugging logs...
I0414 13:28:41.813673 1097960 cli_runner.go:164] Run: docker network inspect embed-certs-175663
W0414 13:28:41.829305 1097960 cli_runner.go:211] docker network inspect embed-certs-175663 returned with exit code 1
I0414 13:28:41.829343 1097960 network_create.go:287] error running [docker network inspect embed-certs-175663]: docker network inspect embed-certs-175663: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-175663 not found
I0414 13:28:41.829358 1097960 network_create.go:289] output of [docker network inspect embed-certs-175663]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-175663 not found
** /stderr **
I0414 13:28:41.829467 1097960 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0414 13:28:41.846629 1097960 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-fc5b53e0f417 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:52:f3:77:a5:59:e2} reservation:<nil>}
I0414 13:28:41.846918 1097960 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcaeb29dabf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:0c:b5:f0:5b:97} reservation:<nil>}
I0414 13:28:41.847255 1097960 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-40ad7b69ff41 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:12:c3:e4:cb:d0:53} reservation:<nil>}
I0414 13:28:41.847671 1097960 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c73e0}
I0414 13:28:41.847696 1097960 network_create.go:124] attempt to create docker network embed-certs-175663 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I0414 13:28:41.847757 1097960 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-175663 embed-certs-175663
I0414 13:28:41.916582 1097960 network_create.go:108] docker network embed-certs-175663 192.168.76.0/24 created
I0414 13:28:41.916618 1097960 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-175663" container
I0414 13:28:41.916707 1097960 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0414 13:28:41.933543 1097960 cli_runner.go:164] Run: docker volume create embed-certs-175663 --label name.minikube.sigs.k8s.io=embed-certs-175663 --label created_by.minikube.sigs.k8s.io=true
I0414 13:28:41.957887 1097960 oci.go:103] Successfully created a docker volume embed-certs-175663
I0414 13:28:41.958012 1097960 cli_runner.go:164] Run: docker run --rm --name embed-certs-175663-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-175663 --entrypoint /usr/bin/test -v embed-certs-175663:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -d /var/lib
I0414 13:28:42.602336 1097960 oci.go:107] Successfully prepared a docker volume embed-certs-175663
I0414 13:28:42.602390 1097960 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 13:28:42.602411 1097960 kic.go:194] Starting extracting preloaded images to volume ...
I0414 13:28:42.602482 1097960 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20384-872300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-175663:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -I lz4 -xf /preloaded.tar -C /extractDir
I0414 13:28:47.141862 1087820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0414 13:28:47.154963 1087820 api_server.go:72] duration metric: took 5m49.870694905s to wait for apiserver process to appear ...
I0414 13:28:47.154990 1087820 api_server.go:88] waiting for apiserver healthz status ...
I0414 13:28:47.155026 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0414 13:28:47.155083 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0414 13:28:47.195449 1087820 cri.go:89] found id: "b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e"
I0414 13:28:47.195470 1087820 cri.go:89] found id: "7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717"
I0414 13:28:47.195475 1087820 cri.go:89] found id: ""
I0414 13:28:47.195483 1087820 logs.go:282] 2 containers: [b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e 7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717]
I0414 13:28:47.195541 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.199684 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.203283 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0414 13:28:47.203356 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0414 13:28:47.246189 1087820 cri.go:89] found id: "ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa"
I0414 13:28:47.246212 1087820 cri.go:89] found id: "5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87"
I0414 13:28:47.246217 1087820 cri.go:89] found id: ""
I0414 13:28:47.246224 1087820 logs.go:282] 2 containers: [ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa 5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87]
I0414 13:28:47.246279 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.250403 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.254165 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0414 13:28:47.254247 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0414 13:28:47.301169 1087820 cri.go:89] found id: "5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2"
I0414 13:28:47.301193 1087820 cri.go:89] found id: "69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652"
I0414 13:28:47.301198 1087820 cri.go:89] found id: ""
I0414 13:28:47.301205 1087820 logs.go:282] 2 containers: [5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2 69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652]
I0414 13:28:47.301264 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.305335 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.308966 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0414 13:28:47.309073 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0414 13:28:47.355346 1087820 cri.go:89] found id: "7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f"
I0414 13:28:47.355414 1087820 cri.go:89] found id: "102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588"
I0414 13:28:47.355434 1087820 cri.go:89] found id: ""
I0414 13:28:47.355449 1087820 logs.go:282] 2 containers: [7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f 102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588]
I0414 13:28:47.355544 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.359479 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.363132 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0414 13:28:47.363211 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0414 13:28:47.430687 1087820 cri.go:89] found id: "309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f"
I0414 13:28:47.430712 1087820 cri.go:89] found id: "8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a"
I0414 13:28:47.430719 1087820 cri.go:89] found id: ""
I0414 13:28:47.430727 1087820 logs.go:282] 2 containers: [309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f 8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a]
I0414 13:28:47.430786 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.435414 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.442224 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0414 13:28:47.442318 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0414 13:28:47.507894 1087820 cri.go:89] found id: "a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150"
I0414 13:28:47.507919 1087820 cri.go:89] found id: "ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa"
I0414 13:28:47.507924 1087820 cri.go:89] found id: ""
I0414 13:28:47.507931 1087820 logs.go:282] 2 containers: [a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150 ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa]
I0414 13:28:47.508012 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.511936 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.515642 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0414 13:28:47.515712 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0414 13:28:47.598489 1087820 cri.go:89] found id: "ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110"
I0414 13:28:47.598513 1087820 cri.go:89] found id: "2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832"
I0414 13:28:47.598518 1087820 cri.go:89] found id: ""
I0414 13:28:47.598526 1087820 logs.go:282] 2 containers: [ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110 2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832]
I0414 13:28:47.598581 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.602836 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.607458 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0414 13:28:47.607527 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0414 13:28:47.685395 1087820 cri.go:89] found id: "550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c"
I0414 13:28:47.685415 1087820 cri.go:89] found id: ""
I0414 13:28:47.685424 1087820 logs.go:282] 1 containers: [550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c]
I0414 13:28:47.685511 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.690779 1087820 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0414 13:28:47.690852 1087820 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0414 13:28:47.748780 1087820 cri.go:89] found id: "56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c"
I0414 13:28:47.748806 1087820 cri.go:89] found id: "f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6"
I0414 13:28:47.748811 1087820 cri.go:89] found id: ""
I0414 13:28:47.748818 1087820 logs.go:282] 2 containers: [56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6]
I0414 13:28:47.748873 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.752966 1087820 ssh_runner.go:195] Run: which crictl
I0414 13:28:47.756860 1087820 logs.go:123] Gathering logs for kube-scheduler [7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f] ...
I0414 13:28:47.756925 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f"
I0414 13:28:47.811393 1087820 logs.go:123] Gathering logs for kube-proxy [8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a] ...
I0414 13:28:47.811424 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a"
I0414 13:28:47.929548 1087820 logs.go:123] Gathering logs for kube-controller-manager [a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150] ...
I0414 13:28:47.929579 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150"
I0414 13:28:48.292975 1087820 logs.go:123] Gathering logs for kube-controller-manager [ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa] ...
I0414 13:28:48.293019 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa"
I0414 13:28:48.416421 1087820 logs.go:123] Gathering logs for kubernetes-dashboard [550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c] ...
I0414 13:28:48.416455 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c"
I0414 13:28:48.542826 1087820 logs.go:123] Gathering logs for containerd ...
I0414 13:28:48.542864 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0414 13:28:48.690583 1087820 logs.go:123] Gathering logs for storage-provisioner [f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6] ...
I0414 13:28:48.690668 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6"
I0414 13:28:48.799562 1087820 logs.go:123] Gathering logs for describe nodes ...
I0414 13:28:48.799592 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0414 13:28:49.020857 1087820 logs.go:123] Gathering logs for kube-apiserver [b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e] ...
I0414 13:28:49.020889 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e"
I0414 13:28:49.170699 1087820 logs.go:123] Gathering logs for kube-apiserver [7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717] ...
I0414 13:28:49.170739 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717"
I0414 13:28:49.247835 1087820 logs.go:123] Gathering logs for etcd [ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa] ...
I0414 13:28:49.247872 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa"
I0414 13:28:49.305040 1087820 logs.go:123] Gathering logs for kindnet [ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110] ...
I0414 13:28:49.305117 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110"
I0414 13:28:49.355632 1087820 logs.go:123] Gathering logs for kindnet [2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832] ...
I0414 13:28:49.355664 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832"
I0414 13:28:49.399256 1087820 logs.go:123] Gathering logs for container status ...
I0414 13:28:49.399284 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0414 13:28:49.448958 1087820 logs.go:123] Gathering logs for dmesg ...
I0414 13:28:49.448991 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0414 13:28:49.468450 1087820 logs.go:123] Gathering logs for etcd [5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87] ...
I0414 13:28:49.468522 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87"
I0414 13:28:49.510148 1087820 logs.go:123] Gathering logs for coredns [69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652] ...
I0414 13:28:49.510177 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652"
I0414 13:28:49.557180 1087820 logs.go:123] Gathering logs for kube-scheduler [102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588] ...
I0414 13:28:49.557258 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588"
I0414 13:28:49.618798 1087820 logs.go:123] Gathering logs for kube-proxy [309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f] ...
I0414 13:28:49.618830 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f"
I0414 13:28:49.676689 1087820 logs.go:123] Gathering logs for storage-provisioner [56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c] ...
I0414 13:28:49.676720 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c"
I0414 13:28:49.718944 1087820 logs.go:123] Gathering logs for kubelet ...
I0414 13:28:49.718972 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0414 13:28:49.771258 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.182510 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-wwbtf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-wwbtf" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.771714 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234087 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.771963 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234165 662 reflector.go:138] object-"kube-system"/"kindnet-token-nhv25": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nhv25" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.772203 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234223 662 reflector.go:138] object-"kube-system"/"coredns-token-hhfd8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-hhfd8" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.772473 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234276 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.772748 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.234329 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-9w6cs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-9w6cs" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.777845 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.388675 662 reflector.go:138] object-"default"/"default-token-lkd8z": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-lkd8z" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.778124 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:14 old-k8s-version-208098 kubelet[662]: E0414 13:23:14.388739 662 reflector.go:138] object-"kube-system"/"metrics-server-token-fw4rr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fw4rr" is forbidden: User "system:node:old-k8s-version-208098" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-208098' and this object
W0414 13:28:49.787529 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:17 old-k8s-version-208098 kubelet[662]: E0414 13:23:17.967296 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:49.787837 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:18 old-k8s-version-208098 kubelet[662]: E0414 13:23:18.520093 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.792155 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:33 old-k8s-version-208098 kubelet[662]: E0414 13:23:33.291389 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:49.792816 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:44 old-k8s-version-208098 kubelet[662]: E0414 13:23:44.293462 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.793712 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:47 old-k8s-version-208098 kubelet[662]: E0414 13:23:47.683929 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.794231 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:48 old-k8s-version-208098 kubelet[662]: E0414 13:23:48.688109 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.794653 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:49 old-k8s-version-208098 kubelet[662]: E0414 13:23:49.691043 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.797699 1087820 logs.go:138] Found kubelet problem: Apr 14 13:23:56 old-k8s-version-208098 kubelet[662]: E0414 13:23:56.291650 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:49.798330 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:04 old-k8s-version-208098 kubelet[662]: E0414 13:24:04.746285 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.798758 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:07 old-k8s-version-208098 kubelet[662]: E0414 13:24:07.745784 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.798974 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:09 old-k8s-version-208098 kubelet[662]: E0414 13:24:09.281525 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.799187 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:21 old-k8s-version-208098 kubelet[662]: E0414 13:24:21.285170 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.799540 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:22 old-k8s-version-208098 kubelet[662]: E0414 13:24:22.280812 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.799792 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:34 old-k8s-version-208098 kubelet[662]: E0414 13:24:34.281206 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.800449 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:35 old-k8s-version-208098 kubelet[662]: E0414 13:24:35.832817 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.800855 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:37 old-k8s-version-208098 kubelet[662]: E0414 13:24:37.746200 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
I0414 13:28:47.369423 1097960 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20384-872300/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-175663:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -I lz4 -xf /preloaded.tar -C /extractDir: (4.766898803s)
I0414 13:28:47.369470 1097960 kic.go:203] duration metric: took 4.767055383s to extract preloaded images to volume ...
W0414 13:28:47.369723 1097960 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0414 13:28:47.369839 1097960 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0414 13:28:47.467227 1097960 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-175663 --name embed-certs-175663 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-175663 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-175663 --network embed-certs-175663 --ip 192.168.76.2 --volume embed-certs-175663:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a
I0414 13:28:47.860157 1097960 cli_runner.go:164] Run: docker container inspect embed-certs-175663 --format={{.State.Running}}
I0414 13:28:47.882999 1097960 cli_runner.go:164] Run: docker container inspect embed-certs-175663 --format={{.State.Status}}
I0414 13:28:47.915356 1097960 cli_runner.go:164] Run: docker exec embed-certs-175663 stat /var/lib/dpkg/alternatives/iptables
I0414 13:28:47.979994 1097960 oci.go:144] the created container "embed-certs-175663" has a running status.
I0414 13:28:47.980023 1097960 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20384-872300/.minikube/machines/embed-certs-175663/id_rsa...
I0414 13:28:48.577075 1097960 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20384-872300/.minikube/machines/embed-certs-175663/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0414 13:28:48.603860 1097960 cli_runner.go:164] Run: docker container inspect embed-certs-175663 --format={{.State.Status}}
I0414 13:28:48.627715 1097960 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0414 13:28:48.627734 1097960 kic_runner.go:114] Args: [docker exec --privileged embed-certs-175663 chown docker:docker /home/docker/.ssh/authorized_keys]
I0414 13:28:48.699808 1097960 cli_runner.go:164] Run: docker container inspect embed-certs-175663 --format={{.State.Status}}
I0414 13:28:48.729599 1097960 machine.go:93] provisionDockerMachine start ...
I0414 13:28:48.729707 1097960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175663
I0414 13:28:48.757317 1097960 main.go:141] libmachine: Using SSH client type: native
I0414 13:28:48.757722 1097960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34178 <nil> <nil>}
I0414 13:28:48.757734 1097960 main.go:141] libmachine: About to run SSH command:
hostname
I0414 13:28:48.758275 1097960 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52496->127.0.0.1:34178: read: connection reset by peer
I0414 13:28:51.885403 1097960 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175663
I0414 13:28:51.885435 1097960 ubuntu.go:169] provisioning hostname "embed-certs-175663"
I0414 13:28:51.885524 1097960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175663
I0414 13:28:51.904586 1097960 main.go:141] libmachine: Using SSH client type: native
I0414 13:28:51.904905 1097960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34178 <nil> <nil>}
I0414 13:28:51.904924 1097960 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-175663 && echo "embed-certs-175663" | sudo tee /etc/hostname
I0414 13:28:52.053190 1097960 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-175663
I0414 13:28:52.053267 1097960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175663
I0414 13:28:52.073588 1097960 main.go:141] libmachine: Using SSH client type: native
I0414 13:28:52.073931 1097960 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34178 <nil> <nil>}
I0414 13:28:52.073960 1097960 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-175663' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-175663/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-175663' | sudo tee -a /etc/hosts;
fi
fi
I0414 13:28:52.201962 1097960 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 13:28:52.202001 1097960 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20384-872300/.minikube CaCertPath:/home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20384-872300/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20384-872300/.minikube}
I0414 13:28:52.202023 1097960 ubuntu.go:177] setting up certificates
I0414 13:28:52.202033 1097960 provision.go:84] configureAuth start
I0414 13:28:52.202094 1097960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175663
I0414 13:28:52.221171 1097960 provision.go:143] copyHostCerts
I0414 13:28:52.221240 1097960 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-872300/.minikube/ca.pem, removing ...
I0414 13:28:52.221249 1097960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-872300/.minikube/ca.pem
I0414 13:28:52.221334 1097960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20384-872300/.minikube/ca.pem (1082 bytes)
I0414 13:28:52.221442 1097960 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-872300/.minikube/cert.pem, removing ...
I0414 13:28:52.221453 1097960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-872300/.minikube/cert.pem
I0414 13:28:52.221482 1097960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20384-872300/.minikube/cert.pem (1123 bytes)
I0414 13:28:52.221575 1097960 exec_runner.go:144] found /home/jenkins/minikube-integration/20384-872300/.minikube/key.pem, removing ...
I0414 13:28:52.221585 1097960 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20384-872300/.minikube/key.pem
I0414 13:28:52.221647 1097960 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20384-872300/.minikube/key.pem (1675 bytes)
I0414 13:28:52.221708 1097960 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20384-872300/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca-key.pem org=jenkins.embed-certs-175663 san=[127.0.0.1 192.168.76.2 embed-certs-175663 localhost minikube]
I0414 13:28:52.651371 1097960 provision.go:177] copyRemoteCerts
I0414 13:28:52.651451 1097960 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0414 13:28:52.651493 1097960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175663
I0414 13:28:52.669429 1097960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/embed-certs-175663/id_rsa Username:docker}
I0414 13:28:52.763962 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0414 13:28:52.791165 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0414 13:28:52.817541 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0414 13:28:52.847999 1097960 provision.go:87] duration metric: took 645.952703ms to configureAuth
I0414 13:28:52.848025 1097960 ubuntu.go:193] setting minikube options for container-runtime
I0414 13:28:52.848235 1097960 config.go:182] Loaded profile config "embed-certs-175663": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 13:28:52.848251 1097960 machine.go:96] duration metric: took 4.118614826s to provisionDockerMachine
I0414 13:28:52.848258 1097960 client.go:171] duration metric: took 11.052022767s to LocalClient.Create
I0414 13:28:52.848271 1097960 start.go:167] duration metric: took 11.052102464s to libmachine.API.Create "embed-certs-175663"
I0414 13:28:52.848279 1097960 start.go:293] postStartSetup for "embed-certs-175663" (driver="docker")
I0414 13:28:52.848290 1097960 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0414 13:28:52.848341 1097960 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0414 13:28:52.848389 1097960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175663
I0414 13:28:52.867203 1097960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/embed-certs-175663/id_rsa Username:docker}
I0414 13:28:52.959779 1097960 ssh_runner.go:195] Run: cat /etc/os-release
I0414 13:28:52.963104 1097960 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0414 13:28:52.963141 1097960 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0414 13:28:52.963153 1097960 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0414 13:28:52.963160 1097960 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0414 13:28:52.963171 1097960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-872300/.minikube/addons for local assets ...
I0414 13:28:52.963240 1097960 filesync.go:126] Scanning /home/jenkins/minikube-integration/20384-872300/.minikube/files for local assets ...
I0414 13:28:52.963327 1097960 filesync.go:149] local asset: /home/jenkins/minikube-integration/20384-872300/.minikube/files/etc/ssl/certs/8777952.pem -> 8777952.pem in /etc/ssl/certs
I0414 13:28:52.963434 1097960 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0414 13:28:52.972618 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/files/etc/ssl/certs/8777952.pem --> /etc/ssl/certs/8777952.pem (1708 bytes)
I0414 13:28:52.997707 1097960 start.go:296] duration metric: took 149.412992ms for postStartSetup
I0414 13:28:52.998139 1097960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175663
I0414 13:28:53.017459 1097960 profile.go:143] Saving config to /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/config.json ...
I0414 13:28:53.017864 1097960 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0414 13:28:53.017918 1097960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175663
I0414 13:28:53.037333 1097960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/embed-certs-175663/id_rsa Username:docker}
I0414 13:28:53.126590 1097960 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0414 13:28:53.131295 1097960 start.go:128] duration metric: took 11.338879351s to createHost
I0414 13:28:53.131323 1097960 start.go:83] releasing machines lock for "embed-certs-175663", held for 11.339014434s
I0414 13:28:53.131396 1097960 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-175663
I0414 13:28:53.148696 1097960 ssh_runner.go:195] Run: cat /version.json
I0414 13:28:53.148752 1097960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175663
I0414 13:28:53.149000 1097960 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0414 13:28:53.149071 1097960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-175663
I0414 13:28:53.167676 1097960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/embed-certs-175663/id_rsa Username:docker}
I0414 13:28:53.177197 1097960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34178 SSHKeyPath:/home/jenkins/minikube-integration/20384-872300/.minikube/machines/embed-certs-175663/id_rsa Username:docker}
I0414 13:28:53.403938 1097960 ssh_runner.go:195] Run: systemctl --version
I0414 13:28:53.408439 1097960 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0414 13:28:53.412742 1097960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0414 13:28:53.438637 1097960 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0414 13:28:53.438744 1097960 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0414 13:28:53.470810 1097960 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0414 13:28:53.470835 1097960 start.go:495] detecting cgroup driver to use...
I0414 13:28:53.470887 1097960 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0414 13:28:53.470955 1097960 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0414 13:28:53.484440 1097960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0414 13:28:53.496711 1097960 docker.go:217] disabling cri-docker service (if available) ...
I0414 13:28:53.496834 1097960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0414 13:28:53.511079 1097960 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0414 13:28:53.526238 1097960 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0414 13:28:53.630550 1097960 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0414 13:28:53.726208 1097960 docker.go:233] disabling docker service ...
I0414 13:28:53.726327 1097960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0414 13:28:53.749856 1097960 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0414 13:28:53.763142 1097960 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0414 13:28:53.851036 1097960 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0414 13:28:53.949442 1097960 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0414 13:28:53.962390 1097960 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0414 13:28:53.981763 1097960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0414 13:28:53.992659 1097960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0414 13:28:54.004399 1097960 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0414 13:28:54.004543 1097960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0414 13:28:54.018693 1097960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 13:28:54.030697 1097960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0414 13:28:54.042919 1097960 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 13:28:54.055420 1097960 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0414 13:28:54.066428 1097960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0414 13:28:54.077701 1097960 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0414 13:28:54.090573 1097960 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0414 13:28:54.101290 1097960 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0414 13:28:54.110984 1097960 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0414 13:28:54.120423 1097960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 13:28:54.214654 1097960 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0414 13:28:54.363201 1097960 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0414 13:28:54.363293 1097960 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 13:28:54.367298 1097960 start.go:563] Will wait 60s for crictl version
I0414 13:28:54.367375 1097960 ssh_runner.go:195] Run: which crictl
I0414 13:28:54.371189 1097960 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0414 13:28:54.415533 1097960 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0414 13:28:54.415618 1097960 ssh_runner.go:195] Run: containerd --version
I0414 13:28:54.440625 1097960 ssh_runner.go:195] Run: containerd --version
I0414 13:28:54.471440 1097960 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.27 ...
W0414 13:28:49.803453 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:47 old-k8s-version-208098 kubelet[662]: E0414 13:24:47.292155 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:49.803821 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:53 old-k8s-version-208098 kubelet[662]: E0414 13:24:53.281914 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.804036 1087820 logs.go:138] Found kubelet problem: Apr 14 13:24:59 old-k8s-version-208098 kubelet[662]: E0414 13:24:59.281828 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.804464 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:05 old-k8s-version-208098 kubelet[662]: E0414 13:25:05.281366 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.804680 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:10 old-k8s-version-208098 kubelet[662]: E0414 13:25:10.281442 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.805291 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:17 old-k8s-version-208098 kubelet[662]: E0414 13:25:17.957823 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.805516 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:24 old-k8s-version-208098 kubelet[662]: E0414 13:25:24.281394 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.805919 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:27 old-k8s-version-208098 kubelet[662]: E0414 13:25:27.745793 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.806133 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:36 old-k8s-version-208098 kubelet[662]: E0414 13:25:36.281304 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.806488 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:39 old-k8s-version-208098 kubelet[662]: E0414 13:25:39.280829 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.806697 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:49 old-k8s-version-208098 kubelet[662]: E0414 13:25:49.285311 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.807084 1087820 logs.go:138] Found kubelet problem: Apr 14 13:25:54 old-k8s-version-208098 kubelet[662]: E0414 13:25:54.280872 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.807297 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:03 old-k8s-version-208098 kubelet[662]: E0414 13:26:03.281419 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.807652 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:09 old-k8s-version-208098 kubelet[662]: E0414 13:26:09.284722 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.810206 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:15 old-k8s-version-208098 kubelet[662]: E0414 13:26:15.299114 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W0414 13:28:49.810567 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:24 old-k8s-version-208098 kubelet[662]: E0414 13:26:24.280839 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.810778 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:29 old-k8s-version-208098 kubelet[662]: E0414 13:26:29.281837 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.811390 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:40 old-k8s-version-208098 kubelet[662]: E0414 13:26:40.191923 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.811603 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:43 old-k8s-version-208098 kubelet[662]: E0414 13:26:43.281861 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.811956 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:47 old-k8s-version-208098 kubelet[662]: E0414 13:26:47.746404 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.812167 1087820 logs.go:138] Found kubelet problem: Apr 14 13:26:54 old-k8s-version-208098 kubelet[662]: E0414 13:26:54.281826 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.812522 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:02 old-k8s-version-208098 kubelet[662]: E0414 13:27:02.280879 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.812738 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:09 old-k8s-version-208098 kubelet[662]: E0414 13:27:09.285633 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.813091 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:15 old-k8s-version-208098 kubelet[662]: E0414 13:27:15.281925 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.813301 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:23 old-k8s-version-208098 kubelet[662]: E0414 13:27:23.281150 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.813696 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:28 old-k8s-version-208098 kubelet[662]: E0414 13:27:28.280821 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.813912 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:38 old-k8s-version-208098 kubelet[662]: E0414 13:27:38.281280 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.814265 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:42 old-k8s-version-208098 kubelet[662]: E0414 13:27:42.281001 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.814475 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:49 old-k8s-version-208098 kubelet[662]: E0414 13:27:49.281897 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.814826 1087820 logs.go:138] Found kubelet problem: Apr 14 13:27:54 old-k8s-version-208098 kubelet[662]: E0414 13:27:54.280822 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.815036 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:04 old-k8s-version-208098 kubelet[662]: E0414 13:28:04.281150 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.815387 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:07 old-k8s-version-208098 kubelet[662]: E0414 13:28:07.281556 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.815599 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:15 old-k8s-version-208098 kubelet[662]: E0414 13:28:15.282302 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.815951 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:22 old-k8s-version-208098 kubelet[662]: E0414 13:28:22.281274 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.816167 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:27 old-k8s-version-208098 kubelet[662]: E0414 13:28:27.281688 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.816542 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:33 old-k8s-version-208098 kubelet[662]: E0414 13:28:33.281440 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.816754 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:40 old-k8s-version-208098 kubelet[662]: E0414 13:28:40.281074 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.817104 1087820 logs.go:138] Found kubelet problem: Apr 14 13:28:48 old-k8s-version-208098 kubelet[662]: E0414 13:28:48.295936 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
I0414 13:28:49.817131 1087820 logs.go:123] Gathering logs for coredns [5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2] ...
I0414 13:28:49.817158 1087820 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2"
I0414 13:28:49.873896 1087820 out.go:358] Setting ErrFile to fd 2...
I0414 13:28:49.873968 1087820 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0414 13:28:49.874026 1087820 out.go:270] X Problems detected in kubelet:
W0414 13:28:49.874041 1087820 out.go:270] Apr 14 13:28:22 old-k8s-version-208098 kubelet[662]: E0414 13:28:22.281274 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.874047 1087820 out.go:270] Apr 14 13:28:27 old-k8s-version-208098 kubelet[662]: E0414 13:28:27.281688 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.874062 1087820 out.go:270] Apr 14 13:28:33 old-k8s-version-208098 kubelet[662]: E0414 13:28:33.281440 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
W0414 13:28:49.874067 1087820 out.go:270] Apr 14 13:28:40 old-k8s-version-208098 kubelet[662]: E0414 13:28:40.281074 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 13:28:49.874076 1087820 out.go:270] Apr 14 13:28:48 old-k8s-version-208098 kubelet[662]: E0414 13:28:48.295936 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
I0414 13:28:49.874086 1087820 out.go:358] Setting ErrFile to fd 2...
I0414 13:28:49.874091 1087820 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 13:28:54.474334 1097960 cli_runner.go:164] Run: docker network inspect embed-certs-175663 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0414 13:28:54.491826 1097960 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0414 13:28:54.495754 1097960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 13:28:54.507187 1097960 kubeadm.go:883] updating cluster {Name:embed-certs-175663 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-175663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0414 13:28:54.507305 1097960 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 13:28:54.507381 1097960 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 13:28:54.549140 1097960 containerd.go:627] all images are preloaded for containerd runtime.
I0414 13:28:54.549162 1097960 containerd.go:534] Images already preloaded, skipping extraction
I0414 13:28:54.549223 1097960 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 13:28:54.594779 1097960 containerd.go:627] all images are preloaded for containerd runtime.
I0414 13:28:54.594799 1097960 cache_images.go:84] Images are preloaded, skipping loading
I0414 13:28:54.594807 1097960 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.32.2 containerd true true} ...
I0414 13:28:54.594899 1097960 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-175663 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:embed-certs-175663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0414 13:28:54.594961 1097960 ssh_runner.go:195] Run: sudo crictl info
I0414 13:28:54.639948 1097960 cni.go:84] Creating CNI manager for ""
I0414 13:28:54.639976 1097960 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0414 13:28:54.639988 1097960 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0414 13:28:54.640010 1097960 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-175663 NodeName:embed-certs-175663 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0414 13:28:54.640135 1097960 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-175663"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0414 13:28:54.640215 1097960 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0414 13:28:54.649809 1097960 binaries.go:44] Found k8s binaries, skipping transfer
I0414 13:28:54.649878 1097960 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0414 13:28:54.659645 1097960 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0414 13:28:54.678966 1097960 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0414 13:28:54.699142 1097960 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0414 13:28:54.718269 1097960 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0414 13:28:54.722098 1097960 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 13:28:54.734325 1097960 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 13:28:54.833425 1097960 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 13:28:54.850028 1097960 certs.go:68] Setting up /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663 for IP: 192.168.76.2
I0414 13:28:54.850100 1097960 certs.go:194] generating shared ca certs ...
I0414 13:28:54.850132 1097960 certs.go:226] acquiring lock for ca certs: {Name:mk6c53e70c2e2090a74ed171d7f164ad48f748f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:28:54.850340 1097960 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20384-872300/.minikube/ca.key
I0414 13:28:54.850439 1097960 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20384-872300/.minikube/proxy-client-ca.key
I0414 13:28:54.850499 1097960 certs.go:256] generating profile certs ...
I0414 13:28:54.850588 1097960 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/client.key
I0414 13:28:54.850646 1097960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/client.crt with IP's: []
I0414 13:28:55.030797 1097960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/client.crt ...
I0414 13:28:55.030836 1097960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/client.crt: {Name:mk948c3f27fd1ec58d20d74d2a0218a1acc10260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:28:55.031051 1097960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/client.key ...
I0414 13:28:55.031066 1097960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/client.key: {Name:mk6bc10d4ab02ec7905b6bb80fee7f7fbed61da4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:28:55.031169 1097960 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.key.d1128a7e
I0414 13:28:55.031187 1097960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.crt.d1128a7e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I0414 13:28:55.417131 1097960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.crt.d1128a7e ...
I0414 13:28:55.417163 1097960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.crt.d1128a7e: {Name:mk17f3f919485135fe14c8062be3dd07476bc7c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:28:55.417353 1097960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.key.d1128a7e ...
I0414 13:28:55.417370 1097960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.key.d1128a7e: {Name:mke603a3a7162b4ed6a77dfead40397e00e3b101 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:28:55.417454 1097960 certs.go:381] copying /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.crt.d1128a7e -> /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.crt
I0414 13:28:55.417547 1097960 certs.go:385] copying /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.key.d1128a7e -> /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.key
I0414 13:28:55.417630 1097960 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/proxy-client.key
I0414 13:28:55.417648 1097960 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/proxy-client.crt with IP's: []
I0414 13:28:55.500050 1097960 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/proxy-client.crt ...
I0414 13:28:55.500078 1097960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/proxy-client.crt: {Name:mkd6c8abaa9997193f3e98bf4c66ad7f9c1de2b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:28:55.500255 1097960 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/proxy-client.key ...
I0414 13:28:55.500268 1097960 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/proxy-client.key: {Name:mk8b495b595f4b766a9e66d37d3f3b9b476620d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 13:28:55.500449 1097960 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/877795.pem (1338 bytes)
W0414 13:28:55.500491 1097960 certs.go:480] ignoring /home/jenkins/minikube-integration/20384-872300/.minikube/certs/877795_empty.pem, impossibly tiny 0 bytes
I0414 13:28:55.500501 1097960 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca-key.pem (1679 bytes)
I0414 13:28:55.500523 1097960 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/ca.pem (1082 bytes)
I0414 13:28:55.500551 1097960 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/cert.pem (1123 bytes)
I0414 13:28:55.500576 1097960 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/certs/key.pem (1675 bytes)
I0414 13:28:55.500622 1097960 certs.go:484] found cert: /home/jenkins/minikube-integration/20384-872300/.minikube/files/etc/ssl/certs/8777952.pem (1708 bytes)
I0414 13:28:55.501206 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0414 13:28:55.528317 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0414 13:28:55.561829 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0414 13:28:55.596082 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0414 13:28:55.622146 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0414 13:28:55.648400 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0414 13:28:55.677754 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0414 13:28:55.710378 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/profiles/embed-certs-175663/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0414 13:28:55.738904 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/files/etc/ssl/certs/8777952.pem --> /usr/share/ca-certificates/8777952.pem (1708 bytes)
I0414 13:28:55.769349 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0414 13:28:55.794706 1097960 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20384-872300/.minikube/certs/877795.pem --> /usr/share/ca-certificates/877795.pem (1338 bytes)
I0414 13:28:55.820400 1097960 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0414 13:28:55.838835 1097960 ssh_runner.go:195] Run: openssl version
I0414 13:28:55.844510 1097960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0414 13:28:55.854455 1097960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0414 13:28:55.858266 1097960 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 12:36 /usr/share/ca-certificates/minikubeCA.pem
I0414 13:28:55.858360 1097960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0414 13:28:55.865524 1097960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0414 13:28:55.875248 1097960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/877795.pem && ln -fs /usr/share/ca-certificates/877795.pem /etc/ssl/certs/877795.pem"
I0414 13:28:55.885083 1097960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/877795.pem
I0414 13:28:55.888846 1097960 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 12:44 /usr/share/ca-certificates/877795.pem
I0414 13:28:55.888916 1097960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/877795.pem
I0414 13:28:55.896339 1097960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/877795.pem /etc/ssl/certs/51391683.0"
I0414 13:28:55.905806 1097960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8777952.pem && ln -fs /usr/share/ca-certificates/8777952.pem /etc/ssl/certs/8777952.pem"
I0414 13:28:55.915493 1097960 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8777952.pem
I0414 13:28:55.919477 1097960 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 12:44 /usr/share/ca-certificates/8777952.pem
I0414 13:28:55.919588 1097960 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8777952.pem
I0414 13:28:55.926821 1097960 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/8777952.pem /etc/ssl/certs/3ec20f2e.0"
I0414 13:28:55.936431 1097960 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0414 13:28:55.940166 1097960 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0414 13:28:55.940227 1097960 kubeadm.go:392] StartCluster: {Name:embed-certs-175663 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-175663 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 13:28:55.940303 1097960 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0414 13:28:55.940364 1097960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0414 13:28:55.980550 1097960 cri.go:89] found id: ""
I0414 13:28:55.980677 1097960 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0414 13:28:55.990241 1097960 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0414 13:28:55.999630 1097960 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0414 13:28:55.999789 1097960 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0414 13:28:56.014894 1097960 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0414 13:28:56.014915 1097960 kubeadm.go:157] found existing configuration files:
I0414 13:28:56.015014 1097960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0414 13:28:56.024817 1097960 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0414 13:28:56.024948 1097960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0414 13:28:56.034410 1097960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0414 13:28:56.044058 1097960 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0414 13:28:56.044130 1097960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0414 13:28:56.053246 1097960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0414 13:28:56.062720 1097960 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0414 13:28:56.062810 1097960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0414 13:28:56.071602 1097960 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0414 13:28:56.081028 1097960 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0414 13:28:56.081147 1097960 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0414 13:28:56.090404 1097960 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0414 13:28:56.137288 1097960 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0414 13:28:56.142357 1097960 kubeadm.go:310] [preflight] Running pre-flight checks
I0414 13:28:56.168297 1097960 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0414 13:28:56.168388 1097960 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1081-aws
I0414 13:28:56.168441 1097960 kubeadm.go:310] OS: Linux
I0414 13:28:56.168490 1097960 kubeadm.go:310] CGROUPS_CPU: enabled
I0414 13:28:56.168542 1097960 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0414 13:28:56.168591 1097960 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0414 13:28:56.168642 1097960 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0414 13:28:56.168693 1097960 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0414 13:28:56.168752 1097960 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0414 13:28:56.168802 1097960 kubeadm.go:310] CGROUPS_PIDS: enabled
I0414 13:28:56.168862 1097960 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0414 13:28:56.168914 1097960 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0414 13:28:56.239172 1097960 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0414 13:28:56.239362 1097960 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0414 13:28:56.239500 1097960 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0414 13:28:56.246280 1097960 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0414 13:28:56.252387 1097960 out.go:235] - Generating certificates and keys ...
I0414 13:28:56.252513 1097960 kubeadm.go:310] [certs] Using existing ca certificate authority
I0414 13:28:56.252626 1097960 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0414 13:28:59.874717 1087820 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I0414 13:28:59.885631 1087820 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I0414 13:28:59.889194 1087820 out.go:201]
W0414 13:28:59.892239 1087820 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0414 13:28:59.892487 1087820 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0414 13:28:59.892544 1087820 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0414 13:28:59.892592 1087820 out.go:270] *
W0414 13:28:59.893569 1087820 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0414 13:28:59.895626 1087820 out.go:201]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
297ef1ed6fcd2 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 6572fb24d5d6a dashboard-metrics-scraper-8d5bb5db8-v9jnb
550ca11262d1e 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 9de034b7932db kubernetes-dashboard-cd95d586-lz9dq
5ec6f96fa7f4f db91994f4ee8f 5 minutes ago Running coredns 1 83e31a4b80c20 coredns-74ff55c5b-5r22q
56d4f42138d71 ba04bb24b9575 5 minutes ago Running storage-provisioner 1 136b9f4a9457b storage-provisioner
c527a092ec73f 1611cd07b61d5 5 minutes ago Running busybox 1 3b49a4ace9b54 busybox
309f2dadeb261 25a5233254979 5 minutes ago Running kube-proxy 1 68ae17b9d29ee kube-proxy-25hcq
ea3a55192a6e7 ee75e27fff91c 5 minutes ago Running kindnet-cni 1 1fd00d3d61ace kindnet-pcfvl
a18f7abb2c8b1 1df8a2b116bd1 5 minutes ago Running kube-controller-manager 1 e1d771ef532f0 kube-controller-manager-old-k8s-version-208098
b5ce1ba4e0d2b 2c08bbbc02d3a 5 minutes ago Running kube-apiserver 1 4e736f43ce2c7 kube-apiserver-old-k8s-version-208098
7c661b3c10529 e7605f88f17d6 5 minutes ago Running kube-scheduler 1 97b9b5a874f24 kube-scheduler-old-k8s-version-208098
ca9d7ffd4226f 05b738aa1bc63 5 minutes ago Running etcd 1 58d1a18da53d4 etcd-old-k8s-version-208098
ae133ab56338e 1611cd07b61d5 6 minutes ago Exited busybox 0 e7fe62b28178b busybox
69999d0b12285 db91994f4ee8f 7 minutes ago Exited coredns 0 470b3decbb246 coredns-74ff55c5b-5r22q
2b3e37c6c41c9 ee75e27fff91c 7 minutes ago Exited kindnet-cni 0 071a20f6052a6 kindnet-pcfvl
f96409a601453 ba04bb24b9575 7 minutes ago Exited storage-provisioner 0 bce6ad9ae92c1 storage-provisioner
8c530512fa590 25a5233254979 7 minutes ago Exited kube-proxy 0 7fd487d82f4c1 kube-proxy-25hcq
7f081663b50cf 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 ddc12e584026f kube-apiserver-old-k8s-version-208098
ed0885294debb 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 5ce876598c577 kube-controller-manager-old-k8s-version-208098
5b2cdcf587bd2 05b738aa1bc63 8 minutes ago Exited etcd 0 541f4054c3049 etcd-old-k8s-version-208098
102a4c4d6e727 e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 01b733aed5ba8 kube-scheduler-old-k8s-version-208098
==> containerd <==
Apr 14 13:25:17 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:25:17.284227183Z" level=info msg="CreateContainer within sandbox \"6572fb24d5d6ac3aaec79ac8fd408613299727cd68960aab302450b2e5fbdd05\" for container name:\"dashboard-metrics-scraper\" attempt:4"
Apr 14 13:25:17 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:25:17.311054456Z" level=info msg="CreateContainer within sandbox \"6572fb24d5d6ac3aaec79ac8fd408613299727cd68960aab302450b2e5fbdd05\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"c6aef6433dc5a4a36ad45a836297fdf24517d827673e76c3e19577591a327c18\""
Apr 14 13:25:17 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:25:17.312072596Z" level=info msg="StartContainer for \"c6aef6433dc5a4a36ad45a836297fdf24517d827673e76c3e19577591a327c18\""
Apr 14 13:25:17 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:25:17.384594762Z" level=info msg="StartContainer for \"c6aef6433dc5a4a36ad45a836297fdf24517d827673e76c3e19577591a327c18\" returns successfully"
Apr 14 13:25:17 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:25:17.384638635Z" level=info msg="received exit event container_id:\"c6aef6433dc5a4a36ad45a836297fdf24517d827673e76c3e19577591a327c18\" id:\"c6aef6433dc5a4a36ad45a836297fdf24517d827673e76c3e19577591a327c18\" pid:3086 exit_status:255 exited_at:{seconds:1744637117 nanos:383903986}"
Apr 14 13:25:17 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:25:17.410957593Z" level=info msg="shim disconnected" id=c6aef6433dc5a4a36ad45a836297fdf24517d827673e76c3e19577591a327c18 namespace=k8s.io
Apr 14 13:25:17 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:25:17.411160066Z" level=warning msg="cleaning up after shim disconnected" id=c6aef6433dc5a4a36ad45a836297fdf24517d827673e76c3e19577591a327c18 namespace=k8s.io
Apr 14 13:25:17 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:25:17.411227833Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:25:17 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:25:17.962321110Z" level=info msg="RemoveContainer for \"b5b6555dcfad896710981cbd02ec383a677c28c4dcf74f576267254ee64155f4\""
Apr 14 13:25:17 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:25:17.971437589Z" level=info msg="RemoveContainer for \"b5b6555dcfad896710981cbd02ec383a677c28c4dcf74f576267254ee64155f4\" returns successfully"
Apr 14 13:26:15 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:15.284715261Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 13:26:15 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:15.295234878Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
Apr 14 13:26:15 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:15.298025049Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Apr 14 13:26:15 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:15.298073378Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Apr 14 13:26:39 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:39.283271978Z" level=info msg="CreateContainer within sandbox \"6572fb24d5d6ac3aaec79ac8fd408613299727cd68960aab302450b2e5fbdd05\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Apr 14 13:26:39 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:39.305492677Z" level=info msg="CreateContainer within sandbox \"6572fb24d5d6ac3aaec79ac8fd408613299727cd68960aab302450b2e5fbdd05\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8\""
Apr 14 13:26:39 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:39.308422920Z" level=info msg="StartContainer for \"297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8\""
Apr 14 13:26:39 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:39.383121568Z" level=info msg="StartContainer for \"297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8\" returns successfully"
Apr 14 13:26:39 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:39.383805721Z" level=info msg="received exit event container_id:\"297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8\" id:\"297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8\" pid:3327 exit_status:255 exited_at:{seconds:1744637199 nanos:382717582}"
Apr 14 13:26:39 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:39.417537961Z" level=info msg="shim disconnected" id=297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8 namespace=k8s.io
Apr 14 13:26:39 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:39.417585658Z" level=warning msg="cleaning up after shim disconnected" id=297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8 namespace=k8s.io
Apr 14 13:26:39 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:39.417665372Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:26:39 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:39.431608827Z" level=warning msg="cleanup warnings time=\"2025-04-14T13:26:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 14 13:26:40 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:40.193052643Z" level=info msg="RemoveContainer for \"c6aef6433dc5a4a36ad45a836297fdf24517d827673e76c3e19577591a327c18\""
Apr 14 13:26:40 old-k8s-version-208098 containerd[569]: time="2025-04-14T13:26:40.199833011Z" level=info msg="RemoveContainer for \"c6aef6433dc5a4a36ad45a836297fdf24517d827673e76c3e19577591a327c18\" returns successfully"
==> coredns [5ec6f96fa7f4f2dd7f70aa3320ad033a121449a1440a3a029f0cfcf66ac143c2] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:58150 - 20170 "HINFO IN 6392052141215464215.3843717182669684263. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006479109s
==> coredns [69999d0b12285ca5ed8e9aa71b282709eb298dacfdbd0acfe3357849f0c9b652] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:55158 - 14536 "HINFO IN 4401423734062181124.357673158104507790. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006921954s
==> describe nodes <==
Name: old-k8s-version-208098
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-208098
kubernetes.io/os=linux
minikube.k8s.io/commit=9d10b41d083ee2064b1b8c7e16503e13b1847696
minikube.k8s.io/name=old-k8s-version-208098
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_04_14T13_20_46_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 14 Apr 2025 13:20:43 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-208098
AcquireTime: <unset>
RenewTime: Mon, 14 Apr 2025 13:28:57 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 14 Apr 2025 13:24:14 +0000 Mon, 14 Apr 2025 13:20:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 14 Apr 2025 13:24:14 +0000 Mon, 14 Apr 2025 13:20:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 14 Apr 2025 13:24:14 +0000 Mon, 14 Apr 2025 13:20:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 14 Apr 2025 13:24:14 +0000 Mon, 14 Apr 2025 13:21:01 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.85.2
Hostname: old-k8s-version-208098
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: 1859e78d63c144afbbc55e871f7d03e2
System UUID: dc169f84-cd42-4cad-bf7b-fc6637fb4bf5
Boot ID: ddac89d2-bbb0-4874-b60d-8ba587325d08
Kernel Version: 5.15.0-1081-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.27
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m37s
kube-system coredns-74ff55c5b-5r22q 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 8m1s
kube-system etcd-old-k8s-version-208098 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m7s
kube-system kindnet-pcfvl 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 8m1s
kube-system kube-apiserver-old-k8s-version-208098 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m7s
kube-system kube-controller-manager-old-k8s-version-208098 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m7s
kube-system kube-proxy-25hcq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m1s
kube-system kube-scheduler-old-k8s-version-208098 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m7s
kube-system metrics-server-9975d5f86-6zb8s 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m25s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m59s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-v9jnb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m29s
kubernetes-dashboard kubernetes-dashboard-cd95d586-lz9dq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m29s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 8m28s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m27s (x4 over 8m27s) kubelet Node old-k8s-version-208098 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m27s (x5 over 8m27s) kubelet Node old-k8s-version-208098 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m27s (x4 over 8m27s) kubelet Node old-k8s-version-208098 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m27s kubelet Updated Node Allocatable limit across pods
Normal Starting 8m8s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m8s kubelet Node old-k8s-version-208098 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m8s kubelet Node old-k8s-version-208098 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m8s kubelet Node old-k8s-version-208098 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m8s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m1s kubelet Node old-k8s-version-208098 status is now: NodeReady
Normal Starting 8m kube-proxy Starting kube-proxy.
Normal Starting 5m57s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m57s (x8 over 5m57s) kubelet Node old-k8s-version-208098 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m57s (x8 over 5m57s) kubelet Node old-k8s-version-208098 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m57s (x7 over 5m57s) kubelet Node old-k8s-version-208098 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m57s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m45s kube-proxy Starting kube-proxy.
==> dmesg <==
[Apr14 12:10] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
==> etcd [5b2cdcf587bd296e923af2e771ddd90360f1a45d35ee4e134216df826375aa87] <==
raft2025/04/14 13:20:36 INFO: 9f0758e1c58a86ed is starting a new election at term 1
raft2025/04/14 13:20:36 INFO: 9f0758e1c58a86ed became candidate at term 2
raft2025/04/14 13:20:36 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
raft2025/04/14 13:20:36 INFO: 9f0758e1c58a86ed became leader at term 2
raft2025/04/14 13:20:36 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
2025-04-14 13:20:36.609225 I | etcdserver: setting up the initial cluster version to 3.4
2025-04-14 13:20:36.610426 N | etcdserver/membership: set the initial cluster version to 3.4
2025-04-14 13:20:36.610610 I | etcdserver/api: enabled capabilities for version 3.4
2025-04-14 13:20:36.610747 I | etcdserver: published {Name:old-k8s-version-208098 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
2025-04-14 13:20:36.610835 I | embed: ready to serve client requests
2025-04-14 13:20:36.612259 I | embed: serving client requests on 127.0.0.1:2379
2025-04-14 13:20:36.615574 I | embed: ready to serve client requests
2025-04-14 13:20:36.617894 I | embed: serving client requests on 192.168.85.2:2379
2025-04-14 13:20:56.230037 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:20:57.401840 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:21:07.399374 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:21:17.399508 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:21:27.399231 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:21:37.399312 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:21:47.399248 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:21:57.399275 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:22:07.399753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:22:17.399906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:22:27.399355 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:22:37.399803 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [ca9d7ffd4226f4f9f232dd750c086ce59649d5da83b202717d8ceb43ecda44aa] <==
2025-04-14 13:24:56.489076 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:25:06.489559 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:25:16.489072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:25:26.488967 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:25:36.488966 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:25:46.488934 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:25:56.488951 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:26:06.488922 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:26:16.488943 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:26:26.488840 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:26:36.489055 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:26:46.489016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:26:56.489063 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:27:06.488941 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:27:16.489070 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:27:26.488966 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:27:36.488897 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:27:46.488877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:27:56.488979 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:28:06.489983 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:28:16.488976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:28:26.489023 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:28:36.490428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:28:46.488950 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 13:28:56.488981 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
13:29:02 up 5:11, 0 users, load average: 1.61, 2.31, 2.87
Linux old-k8s-version-208098 5.15.0-1081-aws #88~20.04.1-Ubuntu SMP Fri Mar 28 14:48:25 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [2b3e37c6c41c91fab9471a24b27a362583cf7914b8e89eaa2821adcb32615832] <==
I0414 13:21:04.724922 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
I0414 13:21:05.123015 1 controller.go:361] Starting controller kube-network-policies
I0414 13:21:05.123040 1 controller.go:365] Waiting for informer caches to sync
I0414 13:21:05.123046 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I0414 13:21:05.323197 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0414 13:21:05.323229 1 metrics.go:61] Registering metrics
I0414 13:21:05.323433 1 controller.go:401] Syncing nftables rules
I0414 13:21:15.129971 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:21:15.130012 1 main.go:301] handling current node
I0414 13:21:25.123101 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:21:25.123145 1 main.go:301] handling current node
I0414 13:21:35.129764 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:21:35.129801 1 main.go:301] handling current node
I0414 13:21:45.130797 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:21:45.130898 1 main.go:301] handling current node
I0414 13:21:55.123120 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:21:55.123156 1 main.go:301] handling current node
I0414 13:22:05.122981 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:22:05.123015 1 main.go:301] handling current node
I0414 13:22:15.125704 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:22:15.125753 1 main.go:301] handling current node
I0414 13:22:25.130802 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:22:25.130851 1 main.go:301] handling current node
I0414 13:22:35.126552 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:22:35.126595 1 main.go:301] handling current node
==> kindnet [ea3a55192a6e743b2b58c9241eb7fae87477381a00b01c16fe0f2344869de110] <==
I0414 13:26:57.867800 1 main.go:301] handling current node
I0414 13:27:07.868351 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:27:07.868388 1 main.go:301] handling current node
I0414 13:27:17.861721 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:27:17.861755 1 main.go:301] handling current node
I0414 13:27:27.865698 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:27:27.865732 1 main.go:301] handling current node
I0414 13:27:37.870539 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:27:37.870577 1 main.go:301] handling current node
I0414 13:27:47.870541 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:27:47.870575 1 main.go:301] handling current node
I0414 13:27:57.867964 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:27:57.868004 1 main.go:301] handling current node
I0414 13:28:07.869949 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:28:07.869985 1 main.go:301] handling current node
I0414 13:28:17.861309 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:28:17.861347 1 main.go:301] handling current node
I0414 13:28:27.867069 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:28:27.867107 1 main.go:301] handling current node
I0414 13:28:37.869813 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:28:37.869845 1 main.go:301] handling current node
I0414 13:28:47.869705 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:28:47.869739 1 main.go:301] handling current node
I0414 13:28:57.867821 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I0414 13:28:57.867861 1 main.go:301] handling current node
==> kube-apiserver [7f081663b50cf66de434e117acd579f5c188999bcc5d110ea9e9c34507e17717] <==
I0414 13:20:43.795947 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0414 13:20:43.796155 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0414 13:20:43.806440 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0414 13:20:43.812334 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0414 13:20:43.812363 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0414 13:20:44.344124 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0414 13:20:44.392405 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0414 13:20:44.505376 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
I0414 13:20:44.506721 1 controller.go:606] quota admission added evaluator for: endpoints
I0414 13:20:44.513066 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0414 13:20:44.844873 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0414 13:20:45.520423 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0414 13:20:46.157745 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0414 13:20:46.233540 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0414 13:21:01.507401 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0414 13:21:01.587727 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0414 13:21:13.965509 1 client.go:360] parsed scheme: "passthrough"
I0414 13:21:13.965555 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 13:21:13.965564 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0414 13:21:49.784104 1 client.go:360] parsed scheme: "passthrough"
I0414 13:21:49.784156 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 13:21:49.784166 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0414 13:22:26.286260 1 client.go:360] parsed scheme: "passthrough"
I0414 13:22:26.286302 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 13:22:26.286310 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [b5ce1ba4e0d2bcd7515ef6fe14737d7e4559723d2ec72d36e8c5cf4408144e8e] <==
I0414 13:25:44.720308 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 13:25:44.720316 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0414 13:26:17.913468 1 handler_proxy.go:102] no RequestInfo found in the context
E0414 13:26:17.913739 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0414 13:26:17.913759 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0414 13:26:25.143622 1 client.go:360] parsed scheme: "passthrough"
I0414 13:26:25.143683 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 13:26:25.143717 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0414 13:27:05.989161 1 client.go:360] parsed scheme: "passthrough"
I0414 13:27:05.989210 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 13:27:05.989219 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0414 13:27:43.273932 1 client.go:360] parsed scheme: "passthrough"
I0414 13:27:43.273988 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 13:27:43.273999 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0414 13:28:15.436812 1 handler_proxy.go:102] no RequestInfo found in the context
E0414 13:28:15.437044 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0414 13:28:15.437061 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0414 13:28:22.601554 1 client.go:360] parsed scheme: "passthrough"
I0414 13:28:22.601596 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 13:28:22.601660 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0414 13:29:02.813249 1 client.go:360] parsed scheme: "passthrough"
I0414 13:29:02.813306 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 13:29:02.813315 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [a18f7abb2c8b1bd0d2384485187d46acb39ebf0da04ea371f3421476d087c150] <==
W0414 13:24:40.136300 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 13:25:04.871784 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 13:25:11.786835 1 request.go:655] Throttling request took 1.048330944s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W0414 13:25:12.638375 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 13:25:35.373875 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 13:25:44.289044 1 request.go:655] Throttling request took 1.048383313s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0414 13:25:45.227712 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 13:26:05.875652 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 13:26:16.878018 1 request.go:655] Throttling request took 1.048237246s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
W0414 13:26:17.729755 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 13:26:36.377987 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 13:26:49.380282 1 request.go:655] Throttling request took 1.04840799s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W0414 13:26:50.231829 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 13:27:06.879902 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 13:27:21.884301 1 request.go:655] Throttling request took 1.048409761s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v2beta1?timeout=32s
W0414 13:27:22.735644 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 13:27:37.381887 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 13:27:54.386303 1 request.go:655] Throttling request took 1.048125568s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0414 13:27:55.238042 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 13:28:07.883863 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 13:28:26.888767 1 request.go:655] Throttling request took 1.04845218s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W0414 13:28:27.740191 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 13:28:38.387625 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 13:28:59.390656 1 request.go:655] Throttling request took 1.046251718s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W0414 13:29:00.247711 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
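Two behaviors interact in the controller-manager loop above: API discovery keeps failing for `metrics.k8s.io/v1beta1` (the metrics-server pod never comes up, so the aggregated API stays unavailable), and client-go's client-side rate limiter delays the discovery GETs by roughly a second each ("Throttling request took 1.04s"). The delay pattern is what a token-bucket limiter produces when a burst of discovery requests exceeds its QPS budget. The sketch below is an illustrative token bucket in the spirit of client-go's `flowcontrol` rate limiter; the `qps`/`burst` values are made up for the example and are not controller-manager's actual configuration.

```python
import time


class TokenBucket:
    """Illustrative client-side rate limiter, similar in spirit to
    client-go's flowcontrol token-bucket limiter. Values here are
    for demonstration only, not Kubernetes defaults."""

    def __init__(self, qps: float, burst: int, now: float = None):
        self.qps = qps
        self.burst = burst
        self.tokens = float(burst)  # start with a full bucket
        self.last = time.monotonic() if now is None else now

    def wait_time(self, now: float = None) -> float:
        """Reserve a token and return how long the caller must wait for it.

        Tokens refill at `qps` per second up to `burst`. The count may go
        negative, which models a queue of reserved-but-not-yet-available
        tokens: each additional caller waits 1/qps longer than the last.
        """
        now = time.monotonic() if now is None else now
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.qps)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return 0.0
        wait = (1 - self.tokens) / self.qps
        self.tokens -= 1
        return wait


# A burst of requests at the same instant: the first is free, and each
# subsequent one waits an extra 1/qps seconds -- the same shape as the
# growing "Throttling request took ..." delays in the log.
tb = TokenBucket(qps=5.0, burst=1, now=0.0)
delays = [tb.wait_time(0.0) for _ in range(4)]  # 0.0, 0.2, 0.4, 0.6 seconds
```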
==> kube-controller-manager [ed0885294debbd3dcb12d3e56da4a8d57aea5d123f43918d2d175c07ebde31aa] <==
I0414 13:21:01.529774 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I0414 13:21:01.538664 1 shared_informer.go:247] Caches are synced for taint
I0414 13:21:01.538767 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W0414 13:21:01.538862 1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-208098. Assuming now as a timestamp.
I0414 13:21:01.538943 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0414 13:21:01.541016 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0414 13:21:01.544469 1 event.go:291] "Event occurred" object="old-k8s-version-208098" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-208098 event: Registered Node old-k8s-version-208098 in Controller"
I0414 13:21:01.548610 1 range_allocator.go:373] Set node old-k8s-version-208098 PodCIDR to [10.244.0.0/24]
I0414 13:21:01.572040 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-snpkj"
I0414 13:21:01.573403 1 shared_informer.go:247] Caches are synced for daemon sets
I0414 13:21:01.612964 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5r22q"
I0414 13:21:01.619093 1 shared_informer.go:247] Caches are synced for job
I0414 13:21:01.681805 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pcfvl"
I0414 13:21:01.681850 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-25hcq"
I0414 13:21:01.742675 1 shared_informer.go:247] Caches are synced for resource quota
I0414 13:21:01.848892 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0414 13:21:01.897460 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0414 13:21:01.897495 1 shared_informer.go:247] Caches are synced for resource quota
I0414 13:21:02.096246 1 shared_informer.go:247] Caches are synced for garbage collector
I0414 13:21:02.096271 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0414 13:21:02.149081 1 shared_informer.go:247] Caches are synced for garbage collector
I0414 13:21:03.273919 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0414 13:21:03.307313 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-snpkj"
I0414 13:21:06.539223 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0414 13:22:36.623397 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
==> kube-proxy [309f2dadeb261d081e5e59a49c92e550e85611e158c5533bce0c9563f7b1827f] <==
I0414 13:23:17.861002 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0414 13:23:17.861070 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0414 13:23:17.935706 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0414 13:23:17.935800 1 server_others.go:185] Using iptables Proxier.
I0414 13:23:17.936025 1 server.go:650] Version: v1.20.0
I0414 13:23:17.936822 1 config.go:315] Starting service config controller
I0414 13:23:17.936843 1 shared_informer.go:240] Waiting for caches to sync for service config
I0414 13:23:17.937774 1 config.go:224] Starting endpoint slice config controller
I0414 13:23:17.937783 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0414 13:23:18.039981 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0414 13:23:18.040057 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [8c530512fa590abd742024cd0d58461482ab3e041f8ccf26fd496da64e3b258a] <==
I0414 13:21:02.541178 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I0414 13:21:02.541299 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W0414 13:21:02.602141 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0414 13:21:02.602279 1 server_others.go:185] Using iptables Proxier.
I0414 13:21:02.602567 1 server.go:650] Version: v1.20.0
I0414 13:21:02.603128 1 config.go:315] Starting service config controller
I0414 13:21:02.603151 1 shared_informer.go:240] Waiting for caches to sync for service config
I0414 13:21:02.603855 1 config.go:224] Starting endpoint slice config controller
I0414 13:21:02.603906 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0414 13:21:02.703225 1 shared_informer.go:247] Caches are synced for service config
I0414 13:21:02.704004 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [102a4c4d6e7279905740dc48568e7bf1503fd52aec9357a5908b7d243a401588] <==
I0414 13:20:36.973180 1 serving.go:331] Generated self-signed cert in-memory
W0414 13:20:43.034516 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0414 13:20:43.034669 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0414 13:20:43.034729 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0414 13:20:43.034753 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0414 13:20:43.107718 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0414 13:20:43.107908 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0414 13:20:43.107931 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0414 13:20:43.109873 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0414 13:20:43.139303 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0414 13:20:43.140837 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0414 13:20:43.141653 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0414 13:20:43.143042 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0414 13:20:43.144400 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0414 13:20:43.145841 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0414 13:20:43.146416 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0414 13:20:43.148067 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0414 13:20:43.150591 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0414 13:20:43.151004 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0414 13:20:43.151223 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0414 13:20:43.151305 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0414 13:20:44.014876 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0414 13:20:44.710026 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [7c661b3c10529c5d6b37dc64af63918cb39a099aacf45ffb9d1289ec2c8e848f] <==
I0414 13:23:08.376392 1 serving.go:331] Generated self-signed cert in-memory
W0414 13:23:14.388235 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0414 13:23:14.388260 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0414 13:23:14.388271 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0414 13:23:14.388277 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0414 13:23:14.563167 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0414 13:23:14.563195 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0414 13:23:14.565683 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0414 13:23:14.565733 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0414 13:23:14.663384 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Apr 14 13:27:15 old-k8s-version-208098 kubelet[662]: E0414 13:27:15.281925 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
Apr 14 13:27:23 old-k8s-version-208098 kubelet[662]: E0414 13:27:23.281150 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 13:27:28 old-k8s-version-208098 kubelet[662]: I0414 13:27:28.280464 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8
Apr 14 13:27:28 old-k8s-version-208098 kubelet[662]: E0414 13:27:28.280821 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
Apr 14 13:27:38 old-k8s-version-208098 kubelet[662]: E0414 13:27:38.281280 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 13:27:42 old-k8s-version-208098 kubelet[662]: I0414 13:27:42.280597 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8
Apr 14 13:27:42 old-k8s-version-208098 kubelet[662]: E0414 13:27:42.281001 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
Apr 14 13:27:49 old-k8s-version-208098 kubelet[662]: E0414 13:27:49.281897 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 13:27:54 old-k8s-version-208098 kubelet[662]: I0414 13:27:54.280450 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8
Apr 14 13:27:54 old-k8s-version-208098 kubelet[662]: E0414 13:27:54.280822 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
Apr 14 13:28:04 old-k8s-version-208098 kubelet[662]: E0414 13:28:04.281150 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 13:28:07 old-k8s-version-208098 kubelet[662]: I0414 13:28:07.280643 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8
Apr 14 13:28:07 old-k8s-version-208098 kubelet[662]: E0414 13:28:07.281556 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
Apr 14 13:28:15 old-k8s-version-208098 kubelet[662]: E0414 13:28:15.282302 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 13:28:22 old-k8s-version-208098 kubelet[662]: I0414 13:28:22.280439 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8
Apr 14 13:28:22 old-k8s-version-208098 kubelet[662]: E0414 13:28:22.281274 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
Apr 14 13:28:27 old-k8s-version-208098 kubelet[662]: E0414 13:28:27.281688 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 13:28:33 old-k8s-version-208098 kubelet[662]: I0414 13:28:33.280604 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8
Apr 14 13:28:33 old-k8s-version-208098 kubelet[662]: E0414 13:28:33.281440 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
Apr 14 13:28:40 old-k8s-version-208098 kubelet[662]: E0414 13:28:40.281074 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 13:28:48 old-k8s-version-208098 kubelet[662]: I0414 13:28:48.280438 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8
Apr 14 13:28:48 old-k8s-version-208098 kubelet[662]: E0414 13:28:48.295936 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
Apr 14 13:28:53 old-k8s-version-208098 kubelet[662]: E0414 13:28:53.288382 662 pod_workers.go:191] Error syncing pod d96c5ecb-7efe-4efc-be29-1ec07df8bfca ("metrics-server-9975d5f86-6zb8s_kube-system(d96c5ecb-7efe-4efc-be29-1ec07df8bfca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 13:28:59 old-k8s-version-208098 kubelet[662]: I0414 13:28:59.280499 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 297ef1ed6fcd2fd5532ef2100c31cfda172807c93f88480907c09ac47ba2d7c8
Apr 14 13:28:59 old-k8s-version-208098 kubelet[662]: E0414 13:28:59.280835 662 pod_workers.go:191] Error syncing pod 427d7aa9-266b-40eb-bda8-d54676612a65 ("dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-v9jnb_kubernetes-dashboard(427d7aa9-266b-40eb-bda8-d54676612a65)"
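The kubelet's "back-off 2m40s restarting failed container" messages above come from the documented CrashLoopBackOff schedule: the restart delay starts at 10s and doubles each crash, capped at five minutes, so 2m40s (160s) is the fifth step. A minimal sketch of that schedule (the 10s initial delay, 2x factor, and 5-minute cap match the documented kubelet behavior, but treat the exact values as illustrative):

```python
import itertools


def crashloop_backoff_delays(initial: int = 10, factor: int = 2, cap: int = 300):
    """Yield kubelet-style restart backoff delays, in seconds:
    exponential growth from `initial`, capped at `cap`."""
    delay = initial
    while True:
        yield min(delay, cap)
        delay *= factor


# First six steps: 10, 20, 40, 80, 160, 300 seconds.
# 160s == 2m40s, the back-off reported for dashboard-metrics-scraper above.
steps = list(itertools.islice(crashloop_backoff_delays(), 6))
```

The metrics-server pod, by contrast, never gets far enough to crash-loop: its image `fake.domain/registry.k8s.io/echoserver:1.4` points at an intentionally unresolvable registry (this test stubs it out), so it stays in ImagePullBackOff instead.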
==> kubernetes-dashboard [550ca11262d1e4f44201062a09444275eb2dea1a2982db1e65a7e4ba1e157e9c] <==
2025/04/14 13:23:38 Starting overwatch
2025/04/14 13:23:38 Using namespace: kubernetes-dashboard
2025/04/14 13:23:38 Using in-cluster config to connect to apiserver
2025/04/14 13:23:38 Using secret token for csrf signing
2025/04/14 13:23:38 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/04/14 13:23:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/04/14 13:23:39 Successful initial request to the apiserver, version: v1.20.0
2025/04/14 13:23:39 Generating JWE encryption key
2025/04/14 13:23:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/04/14 13:23:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/04/14 13:23:39 Initializing JWE encryption key from synchronized object
2025/04/14 13:23:39 Creating in-cluster Sidecar client
2025/04/14 13:23:39 Serving insecurely on HTTP port: 9090
2025/04/14 13:23:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 13:24:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 13:24:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 13:25:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 13:25:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 13:26:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 13:26:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 13:27:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 13:27:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 13:28:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 13:28:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [56d4f42138d716ba129bcceaa996f24a0e8889bbe97d06c2cc65482bb7332e1c] <==
I0414 13:23:18.365137 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0414 13:23:18.435327 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0414 13:23:18.435424 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0414 13:23:35.906199 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0414 13:23:35.906375 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-208098_ffe86fbb-0c20-41f6-9be1-b75df238f23f!
I0414 13:23:35.920955 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f4b6e622-c666-4deb-8b05-e60ee1f5e011", APIVersion:"v1", ResourceVersion:"775", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-208098_ffe86fbb-0c20-41f6-9be1-b75df238f23f became leader
I0414 13:23:36.007384 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-208098_ffe86fbb-0c20-41f6-9be1-b75df238f23f!
==> storage-provisioner [f96409a601453318fc9bb498faaaef87e9321bbfa13cf35e386b5f0261256cb6] <==
I0414 13:21:03.722010 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0414 13:21:03.744045 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0414 13:21:03.744105 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0414 13:21:03.755581 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0414 13:21:03.757145 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f4b6e622-c666-4deb-8b05-e60ee1f5e011", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-208098_c4c3fc32-55a9-4d3e-82d5-5965181df85c became leader
I0414 13:21:03.759538 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-208098_c4c3fc32-55a9-4d3e-82d5-5965181df85c!
I0414 13:21:03.860122 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-208098_c4c3fc32-55a9-4d3e-82d5-5965181df85c!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-208098 -n old-k8s-version-208098
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-208098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-6zb8s
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-208098 describe pod metrics-server-9975d5f86-6zb8s
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-208098 describe pod metrics-server-9975d5f86-6zb8s: exit status 1 (111.585939ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-6zb8s" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-208098 describe pod metrics-server-9975d5f86-6zb8s: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.66s)