=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-943255 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-943255 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m16.350818018s)
-- stdout --
* [old-k8s-version-943255] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20534
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20534-594855/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-594855/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-943255" primary control-plane node in "old-k8s-version-943255" cluster
* Pulling base image v0.0.46-1744107393-20604 ...
* Restarting existing docker container for "old-k8s-version-943255" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
* Verifying Kubernetes components...
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-943255 addons enable metrics-server
* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
-- /stdout --
** stderr **
I0414 11:38:43.707239 808013 out.go:345] Setting OutFile to fd 1 ...
I0414 11:38:43.707468 808013 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:38:43.707496 808013 out.go:358] Setting ErrFile to fd 2...
I0414 11:38:43.707513 808013 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:38:43.707797 808013 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-594855/.minikube/bin
I0414 11:38:43.708196 808013 out.go:352] Setting JSON to false
I0414 11:38:43.709228 808013 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":12069,"bootTime":1744618655,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0414 11:38:43.709320 808013 start.go:139] virtualization:
I0414 11:38:43.712576 808013 out.go:177] * [old-k8s-version-943255] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0414 11:38:43.716642 808013 out.go:177] - MINIKUBE_LOCATION=20534
I0414 11:38:43.716697 808013 notify.go:220] Checking for updates...
I0414 11:38:43.723236 808013 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0414 11:38:43.726030 808013 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20534-594855/kubeconfig
I0414 11:38:43.728825 808013 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-594855/.minikube
I0414 11:38:43.731599 808013 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0414 11:38:43.734433 808013 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0414 11:38:43.737879 808013 config.go:182] Loaded profile config "old-k8s-version-943255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0414 11:38:43.741239 808013 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
I0414 11:38:43.743978 808013 driver.go:394] Setting default libvirt URI to qemu:///system
I0414 11:38:43.780892 808013 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0414 11:38:43.780999 808013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0414 11:38:43.867987 808013 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:60 SystemTime:2025-04-14 11:38:43.858925811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0414 11:38:43.868087 808013 docker.go:318] overlay module found
I0414 11:38:43.871031 808013 out.go:177] * Using the docker driver based on existing profile
I0414 11:38:43.873760 808013 start.go:297] selected driver: docker
I0414 11:38:43.873813 808013 start.go:901] validating driver "docker" against &{Name:old-k8s-version-943255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-943255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 11:38:43.873917 808013 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0414 11:38:43.874636 808013 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0414 11:38:43.961549 808013 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:60 SystemTime:2025-04-14 11:38:43.952673268 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0414 11:38:43.961947 808013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0414 11:38:43.961976 808013 cni.go:84] Creating CNI manager for ""
I0414 11:38:43.962022 808013 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0414 11:38:43.962054 808013 start.go:340] cluster config:
{Name:old-k8s-version-943255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-943255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 11:38:43.965094 808013 out.go:177] * Starting "old-k8s-version-943255" primary control-plane node in "old-k8s-version-943255" cluster
I0414 11:38:43.967808 808013 cache.go:121] Beginning downloading kic base image for docker with containerd
I0414 11:38:43.971808 808013 out.go:177] * Pulling base image v0.0.46-1744107393-20604 ...
I0414 11:38:43.974238 808013 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0414 11:38:43.974288 808013 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-594855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0414 11:38:43.974309 808013 cache.go:56] Caching tarball of preloaded images
I0414 11:38:43.974404 808013 preload.go:172] Found /home/jenkins/minikube-integration/20534-594855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0414 11:38:43.974414 808013 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0414 11:38:43.974522 808013 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/old-k8s-version-943255/config.json ...
I0414 11:38:43.974747 808013 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon
I0414 11:38:43.998898 808013 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon, skipping pull
I0414 11:38:43.998917 808013 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a exists in daemon, skipping load
I0414 11:38:43.998930 808013 cache.go:230] Successfully downloaded all kic artifacts
I0414 11:38:43.998952 808013 start.go:360] acquireMachinesLock for old-k8s-version-943255: {Name:mk889cffa6536044df3c10ded0e32a25d6db23c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 11:38:43.999003 808013 start.go:364] duration metric: took 33.962µs to acquireMachinesLock for "old-k8s-version-943255"
I0414 11:38:43.999022 808013 start.go:96] Skipping create...Using existing machine configuration
I0414 11:38:43.999027 808013 fix.go:54] fixHost starting:
I0414 11:38:43.999279 808013 cli_runner.go:164] Run: docker container inspect old-k8s-version-943255 --format={{.State.Status}}
I0414 11:38:44.018206 808013 fix.go:112] recreateIfNeeded on old-k8s-version-943255: state=Stopped err=<nil>
W0414 11:38:44.018241 808013 fix.go:138] unexpected machine state, will restart: <nil>
I0414 11:38:44.021097 808013 out.go:177] * Restarting existing docker container for "old-k8s-version-943255" ...
I0414 11:38:44.023691 808013 cli_runner.go:164] Run: docker start old-k8s-version-943255
I0414 11:38:44.336238 808013 cli_runner.go:164] Run: docker container inspect old-k8s-version-943255 --format={{.State.Status}}
I0414 11:38:44.358123 808013 kic.go:430] container "old-k8s-version-943255" state is running.
I0414 11:38:44.358522 808013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-943255
I0414 11:38:44.381291 808013 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/old-k8s-version-943255/config.json ...
I0414 11:38:44.381521 808013 machine.go:93] provisionDockerMachine start ...
I0414 11:38:44.381581 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:44.404474 808013 main.go:141] libmachine: Using SSH client type: native
I0414 11:38:44.404798 808013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33806 <nil> <nil>}
I0414 11:38:44.404808 808013 main.go:141] libmachine: About to run SSH command:
hostname
I0414 11:38:44.405391 808013 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48646->127.0.0.1:33806: read: connection reset by peer
I0414 11:38:47.545443 808013 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-943255
I0414 11:38:47.545524 808013 ubuntu.go:169] provisioning hostname "old-k8s-version-943255"
I0414 11:38:47.545616 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:47.581365 808013 main.go:141] libmachine: Using SSH client type: native
I0414 11:38:47.581674 808013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33806 <nil> <nil>}
I0414 11:38:47.581686 808013 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-943255 && echo "old-k8s-version-943255" | sudo tee /etc/hostname
I0414 11:38:47.745399 808013 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-943255
I0414 11:38:47.745474 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:47.799658 808013 main.go:141] libmachine: Using SSH client type: native
I0414 11:38:47.799976 808013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33806 <nil> <nil>}
I0414 11:38:47.799993 808013 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-943255' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-943255/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-943255' | sudo tee -a /etc/hosts;
fi
fi
I0414 11:38:47.934164 808013 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 11:38:47.934238 808013 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20534-594855/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-594855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-594855/.minikube}
I0414 11:38:47.934278 808013 ubuntu.go:177] setting up certificates
I0414 11:38:47.934315 808013 provision.go:84] configureAuth start
I0414 11:38:47.934419 808013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-943255
I0414 11:38:47.964685 808013 provision.go:143] copyHostCerts
I0414 11:38:47.964763 808013 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-594855/.minikube/ca.pem, removing ...
I0414 11:38:47.964780 808013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-594855/.minikube/ca.pem
I0414 11:38:47.964858 808013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-594855/.minikube/ca.pem (1082 bytes)
I0414 11:38:47.964951 808013 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-594855/.minikube/cert.pem, removing ...
I0414 11:38:47.964956 808013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-594855/.minikube/cert.pem
I0414 11:38:47.964988 808013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-594855/.minikube/cert.pem (1123 bytes)
I0414 11:38:47.965060 808013 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-594855/.minikube/key.pem, removing ...
I0414 11:38:47.965065 808013 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-594855/.minikube/key.pem
I0414 11:38:47.965092 808013 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-594855/.minikube/key.pem (1675 bytes)
I0414 11:38:47.965138 808013 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-594855/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-943255 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-943255]
I0414 11:38:48.506998 808013 provision.go:177] copyRemoteCerts
I0414 11:38:48.507121 808013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0414 11:38:48.507200 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:48.524608 808013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33806 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/old-k8s-version-943255/id_rsa Username:docker}
I0414 11:38:48.627084 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0414 11:38:48.672001 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0414 11:38:48.717623 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0414 11:38:48.750298 808013 provision.go:87] duration metric: took 815.949677ms to configureAuth
I0414 11:38:48.750321 808013 ubuntu.go:193] setting minikube options for container-runtime
I0414 11:38:48.750524 808013 config.go:182] Loaded profile config "old-k8s-version-943255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0414 11:38:48.750532 808013 machine.go:96] duration metric: took 4.369003929s to provisionDockerMachine
I0414 11:38:48.750539 808013 start.go:293] postStartSetup for "old-k8s-version-943255" (driver="docker")
I0414 11:38:48.750549 808013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0414 11:38:48.750602 808013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0414 11:38:48.750644 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:48.776250 808013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33806 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/old-k8s-version-943255/id_rsa Username:docker}
I0414 11:38:48.878385 808013 ssh_runner.go:195] Run: cat /etc/os-release
I0414 11:38:48.884244 808013 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0414 11:38:48.884277 808013 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0414 11:38:48.884294 808013 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0414 11:38:48.884304 808013 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0414 11:38:48.884318 808013 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-594855/.minikube/addons for local assets ...
I0414 11:38:48.884372 808013 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-594855/.minikube/files for local assets ...
I0414 11:38:48.884448 808013 filesync.go:149] local asset: /home/jenkins/minikube-integration/20534-594855/.minikube/files/etc/ssl/certs/6002272.pem -> 6002272.pem in /etc/ssl/certs
I0414 11:38:48.884553 808013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0414 11:38:48.899260 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/files/etc/ssl/certs/6002272.pem --> /etc/ssl/certs/6002272.pem (1708 bytes)
I0414 11:38:48.936743 808013 start.go:296] duration metric: took 186.188227ms for postStartSetup
I0414 11:38:48.936886 808013 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0414 11:38:48.936971 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:48.964636 808013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33806 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/old-k8s-version-943255/id_rsa Username:docker}
I0414 11:38:49.056170 808013 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0414 11:38:49.064143 808013 fix.go:56] duration metric: took 5.065108671s for fixHost
I0414 11:38:49.064165 808013 start.go:83] releasing machines lock for "old-k8s-version-943255", held for 5.065154029s
I0414 11:38:49.064250 808013 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-943255
I0414 11:38:49.091487 808013 ssh_runner.go:195] Run: cat /version.json
I0414 11:38:49.091549 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:49.091799 808013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0414 11:38:49.091856 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:49.127034 808013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33806 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/old-k8s-version-943255/id_rsa Username:docker}
I0414 11:38:49.130074 808013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33806 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/old-k8s-version-943255/id_rsa Username:docker}
I0414 11:38:49.217796 808013 ssh_runner.go:195] Run: systemctl --version
I0414 11:38:49.381925 808013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0414 11:38:49.390455 808013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0414 11:38:49.429304 808013 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0414 11:38:49.429454 808013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0414 11:38:49.439536 808013 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0414 11:38:49.439614 808013 start.go:495] detecting cgroup driver to use...
I0414 11:38:49.439657 808013 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0414 11:38:49.439733 808013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0414 11:38:49.462152 808013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0414 11:38:49.479349 808013 docker.go:217] disabling cri-docker service (if available) ...
I0414 11:38:49.479467 808013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0414 11:38:49.499383 808013 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0414 11:38:49.519787 808013 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0414 11:38:49.653957 808013 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0414 11:38:49.782260 808013 docker.go:233] disabling docker service ...
I0414 11:38:49.782371 808013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0414 11:38:49.796366 808013 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0414 11:38:49.808785 808013 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0414 11:38:49.948626 808013 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0414 11:38:50.068109 808013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0414 11:38:50.088553 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0414 11:38:50.114874 808013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0414 11:38:50.125255 808013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0414 11:38:50.151454 808013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0414 11:38:50.151603 808013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0414 11:38:50.164676 808013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 11:38:50.176874 808013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0414 11:38:50.187516 808013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 11:38:50.198057 808013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0414 11:38:50.208143 808013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0414 11:38:50.224610 808013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0414 11:38:50.238438 808013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0414 11:38:50.249209 808013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 11:38:50.395382 808013 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0414 11:38:50.687166 808013 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0414 11:38:50.687325 808013 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 11:38:50.691768 808013 start.go:563] Will wait 60s for crictl version
I0414 11:38:50.691902 808013 ssh_runner.go:195] Run: which crictl
I0414 11:38:50.696254 808013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0414 11:38:50.817039 808013 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0414 11:38:50.817187 808013 ssh_runner.go:195] Run: containerd --version
I0414 11:38:50.856386 808013 ssh_runner.go:195] Run: containerd --version
I0414 11:38:50.901268 808013 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
I0414 11:38:50.904360 808013 cli_runner.go:164] Run: docker network inspect old-k8s-version-943255 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0414 11:38:50.934076 808013 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0414 11:38:50.938207 808013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 11:38:50.950409 808013 kubeadm.go:883] updating cluster {Name:old-k8s-version-943255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-943255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0414 11:38:50.950520 808013 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0414 11:38:50.950589 808013 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 11:38:51.001200 808013 containerd.go:627] all images are preloaded for containerd runtime.
I0414 11:38:51.001220 808013 containerd.go:534] Images already preloaded, skipping extraction
I0414 11:38:51.001281 808013 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 11:38:51.043348 808013 containerd.go:627] all images are preloaded for containerd runtime.
I0414 11:38:51.043420 808013 cache_images.go:84] Images are preloaded, skipping loading
I0414 11:38:51.043444 808013 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0414 11:38:51.043629 808013 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-943255 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-943255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0414 11:38:51.043746 808013 ssh_runner.go:195] Run: sudo crictl info
I0414 11:38:51.110067 808013 cni.go:84] Creating CNI manager for ""
I0414 11:38:51.110143 808013 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0414 11:38:51.110177 808013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0414 11:38:51.110229 808013 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-943255 NodeName:old-k8s-version-943255 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0414 11:38:51.110406 808013 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-943255"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0414 11:38:51.110507 808013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0414 11:38:51.123208 808013 binaries.go:44] Found k8s binaries, skipping transfer
I0414 11:38:51.123341 808013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0414 11:38:51.133927 808013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0414 11:38:51.161748 808013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0414 11:38:51.190500 808013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0414 11:38:51.230236 808013 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0414 11:38:51.234137 808013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 11:38:51.245680 808013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 11:38:51.389791 808013 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 11:38:51.409666 808013 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/old-k8s-version-943255 for IP: 192.168.76.2
I0414 11:38:51.409732 808013 certs.go:194] generating shared ca certs ...
I0414 11:38:51.409763 808013 certs.go:226] acquiring lock for ca certs: {Name:mkc72929fdde159a4ce614d0ceb68f60716f5790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:38:51.409986 808013 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-594855/.minikube/ca.key
I0414 11:38:51.410066 808013 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-594855/.minikube/proxy-client-ca.key
I0414 11:38:51.410103 808013 certs.go:256] generating profile certs ...
I0414 11:38:51.410237 808013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/old-k8s-version-943255/client.key
I0414 11:38:51.410359 808013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/old-k8s-version-943255/apiserver.key.c15fc179
I0414 11:38:51.410447 808013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/old-k8s-version-943255/proxy-client.key
I0414 11:38:51.410608 808013 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/600227.pem (1338 bytes)
W0414 11:38:51.410687 808013 certs.go:480] ignoring /home/jenkins/minikube-integration/20534-594855/.minikube/certs/600227_empty.pem, impossibly tiny 0 bytes
I0414 11:38:51.410714 808013 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca-key.pem (1675 bytes)
I0414 11:38:51.410779 808013 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca.pem (1082 bytes)
I0414 11:38:51.410848 808013 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/cert.pem (1123 bytes)
I0414 11:38:51.410893 808013 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/key.pem (1675 bytes)
I0414 11:38:51.410972 808013 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/files/etc/ssl/certs/6002272.pem (1708 bytes)
I0414 11:38:51.411790 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0414 11:38:51.471115 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0414 11:38:51.517768 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0414 11:38:51.579245 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0414 11:38:51.627901 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/old-k8s-version-943255/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0414 11:38:51.684070 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/old-k8s-version-943255/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0414 11:38:51.711342 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/old-k8s-version-943255/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0414 11:38:51.735419 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/old-k8s-version-943255/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0414 11:38:51.772140 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/certs/600227.pem --> /usr/share/ca-certificates/600227.pem (1338 bytes)
I0414 11:38:51.808313 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/files/etc/ssl/certs/6002272.pem --> /usr/share/ca-certificates/6002272.pem (1708 bytes)
I0414 11:38:51.836778 808013 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0414 11:38:51.865639 808013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0414 11:38:51.889266 808013 ssh_runner.go:195] Run: openssl version
I0414 11:38:51.895716 808013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/600227.pem && ln -fs /usr/share/ca-certificates/600227.pem /etc/ssl/certs/600227.pem"
I0414 11:38:51.905240 808013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/600227.pem
I0414 11:38:51.909241 808013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 11:01 /usr/share/ca-certificates/600227.pem
I0414 11:38:51.909346 808013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/600227.pem
I0414 11:38:51.916731 808013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/600227.pem /etc/ssl/certs/51391683.0"
I0414 11:38:51.925763 808013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6002272.pem && ln -fs /usr/share/ca-certificates/6002272.pem /etc/ssl/certs/6002272.pem"
I0414 11:38:51.935498 808013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6002272.pem
I0414 11:38:51.939269 808013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 11:01 /usr/share/ca-certificates/6002272.pem
I0414 11:38:51.939334 808013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6002272.pem
I0414 11:38:51.946728 808013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6002272.pem /etc/ssl/certs/3ec20f2e.0"
I0414 11:38:51.955390 808013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0414 11:38:51.964636 808013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0414 11:38:51.968731 808013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:53 /usr/share/ca-certificates/minikubeCA.pem
I0414 11:38:51.968844 808013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0414 11:38:51.976505 808013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0414 11:38:51.985289 808013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0414 11:38:51.989321 808013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0414 11:38:51.996457 808013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0414 11:38:52.003525 808013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0414 11:38:52.011100 808013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0414 11:38:52.019192 808013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0414 11:38:52.026986 808013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0414 11:38:52.034651 808013 kubeadm.go:392] StartCluster: {Name:old-k8s-version-943255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-943255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 11:38:52.034746 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0414 11:38:52.034816 808013 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0414 11:38:52.086022 808013 cri.go:89] found id: "33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507"
I0414 11:38:52.086054 808013 cri.go:89] found id: "e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265"
I0414 11:38:52.086059 808013 cri.go:89] found id: "633cdeb29e1cced2a8a92f20652a9a137643be3320a89f7ceee43c47e8e2069f"
I0414 11:38:52.086063 808013 cri.go:89] found id: "13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1"
I0414 11:38:52.086066 808013 cri.go:89] found id: "dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c"
I0414 11:38:52.086070 808013 cri.go:89] found id: "461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42"
I0414 11:38:52.086073 808013 cri.go:89] found id: "d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6"
I0414 11:38:52.086078 808013 cri.go:89] found id: "f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993"
I0414 11:38:52.086081 808013 cri.go:89] found id: ""
I0414 11:38:52.086134 808013 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0414 11:38:52.101485 808013 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-04-14T11:38:52Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0414 11:38:52.101623 808013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0414 11:38:52.112426 808013 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0414 11:38:52.112510 808013 kubeadm.go:593] restartPrimaryControlPlane start ...
I0414 11:38:52.112592 808013 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0414 11:38:52.121490 808013 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0414 11:38:52.122020 808013 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-943255" does not appear in /home/jenkins/minikube-integration/20534-594855/kubeconfig
I0414 11:38:52.122199 808013 kubeconfig.go:62] /home/jenkins/minikube-integration/20534-594855/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-943255" cluster setting kubeconfig missing "old-k8s-version-943255" context setting]
I0414 11:38:52.122530 808013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-594855/kubeconfig: {Name:mk8e574788d73630fd9a80d8b6d7020d3ea20230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:38:52.124053 808013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0414 11:38:52.133871 808013 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0414 11:38:52.133942 808013 kubeadm.go:597] duration metric: took 21.413164ms to restartPrimaryControlPlane
I0414 11:38:52.133965 808013 kubeadm.go:394] duration metric: took 99.324317ms to StartCluster
I0414 11:38:52.134007 808013 settings.go:142] acquiring lock: {Name:mk5ddbf5a28031cc72064a243fbaf01b8b1bb102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:38:52.134094 808013 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20534-594855/kubeconfig
I0414 11:38:52.134742 808013 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-594855/kubeconfig: {Name:mk8e574788d73630fd9a80d8b6d7020d3ea20230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:38:52.134995 808013 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0414 11:38:52.135380 808013 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0414 11:38:52.135451 808013 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-943255"
I0414 11:38:52.135465 808013 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-943255"
W0414 11:38:52.135471 808013 addons.go:247] addon storage-provisioner should already be in state true
I0414 11:38:52.135492 808013 host.go:66] Checking if "old-k8s-version-943255" exists ...
I0414 11:38:52.136138 808013 cli_runner.go:164] Run: docker container inspect old-k8s-version-943255 --format={{.State.Status}}
I0414 11:38:52.136510 808013 config.go:182] Loaded profile config "old-k8s-version-943255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0414 11:38:52.136628 808013 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-943255"
I0414 11:38:52.136657 808013 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-943255"
I0414 11:38:52.136956 808013 cli_runner.go:164] Run: docker container inspect old-k8s-version-943255 --format={{.State.Status}}
I0414 11:38:52.137499 808013 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-943255"
I0414 11:38:52.137531 808013 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-943255"
W0414 11:38:52.137538 808013 addons.go:247] addon metrics-server should already be in state true
I0414 11:38:52.137566 808013 host.go:66] Checking if "old-k8s-version-943255" exists ...
I0414 11:38:52.138008 808013 cli_runner.go:164] Run: docker container inspect old-k8s-version-943255 --format={{.State.Status}}
I0414 11:38:52.141111 808013 addons.go:69] Setting dashboard=true in profile "old-k8s-version-943255"
I0414 11:38:52.141318 808013 addons.go:238] Setting addon dashboard=true in "old-k8s-version-943255"
W0414 11:38:52.141444 808013 addons.go:247] addon dashboard should already be in state true
I0414 11:38:52.141660 808013 host.go:66] Checking if "old-k8s-version-943255" exists ...
I0414 11:38:52.141283 808013 out.go:177] * Verifying Kubernetes components...
I0414 11:38:52.144675 808013 cli_runner.go:164] Run: docker container inspect old-k8s-version-943255 --format={{.State.Status}}
I0414 11:38:52.149125 808013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 11:38:52.183370 808013 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0414 11:38:52.188866 808013 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0414 11:38:52.188903 808013 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0414 11:38:52.188984 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:52.219528 808013 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0414 11:38:52.225569 808013 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0414 11:38:52.225595 808013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0414 11:38:52.225666 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:52.233151 808013 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-943255"
W0414 11:38:52.233173 808013 addons.go:247] addon default-storageclass should already be in state true
I0414 11:38:52.233198 808013 host.go:66] Checking if "old-k8s-version-943255" exists ...
I0414 11:38:52.233604 808013 cli_runner.go:164] Run: docker container inspect old-k8s-version-943255 --format={{.State.Status}}
I0414 11:38:52.241042 808013 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0414 11:38:52.246972 808013 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0414 11:38:52.249761 808013 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0414 11:38:52.249794 808013 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0414 11:38:52.249871 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:52.253971 808013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33806 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/old-k8s-version-943255/id_rsa Username:docker}
I0414 11:38:52.277134 808013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33806 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/old-k8s-version-943255/id_rsa Username:docker}
I0414 11:38:52.291727 808013 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0414 11:38:52.291753 808013 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0414 11:38:52.291814 808013 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-943255
I0414 11:38:52.312089 808013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33806 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/old-k8s-version-943255/id_rsa Username:docker}
I0414 11:38:52.334982 808013 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33806 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/old-k8s-version-943255/id_rsa Username:docker}
I0414 11:38:52.379307 808013 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 11:38:52.428489 808013 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-943255" to be "Ready" ...
I0414 11:38:52.466835 808013 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0414 11:38:52.466904 808013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0414 11:38:52.509398 808013 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0414 11:38:52.509471 808013 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0414 11:38:52.515320 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0414 11:38:52.529703 808013 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0414 11:38:52.529804 808013 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0414 11:38:52.541805 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0414 11:38:52.595809 808013 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0414 11:38:52.595889 808013 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0414 11:38:52.607001 808013 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0414 11:38:52.607081 808013 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0414 11:38:52.680995 808013 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0414 11:38:52.681071 808013 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0414 11:38:52.736877 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0414 11:38:52.757687 808013 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0414 11:38:52.757753 808013 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0414 11:38:52.787036 808013 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0414 11:38:52.787108 808013 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
W0414 11:38:52.796925 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:52.797015 808013 retry.go:31] will retry after 259.506457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 11:38:52.797070 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:52.797095 808013 retry.go:31] will retry after 286.16206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:52.814322 808013 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0414 11:38:52.814398 808013 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0414 11:38:52.839276 808013 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0414 11:38:52.839351 808013 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0414 11:38:52.863440 808013 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0414 11:38:52.863519 808013 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
W0414 11:38:52.880553 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:52.880646 808013 retry.go:31] will retry after 163.313455ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:52.884771 808013 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0414 11:38:52.884830 808013 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0414 11:38:52.902148 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 11:38:52.978913 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:52.978957 808013 retry.go:31] will retry after 268.671217ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.045150 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0414 11:38:53.056867 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0414 11:38:53.084222 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0414 11:38:53.212647 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.212690 808013 retry.go:31] will retry after 499.358853ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 11:38:53.233608 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.233641 808013 retry.go:31] will retry after 316.801707ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.247946 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 11:38:53.274577 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.274614 808013 retry.go:31] will retry after 494.720987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 11:38:53.377329 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.377365 808013 retry.go:31] will retry after 474.64825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.551169 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0414 11:38:53.658339 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.658380 808013 retry.go:31] will retry after 732.232568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.712936 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0414 11:38:53.770329 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0414 11:38:53.803840 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.803917 808013 retry.go:31] will retry after 801.462029ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.852856 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 11:38:53.853650 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.853676 808013 retry.go:31] will retry after 572.996066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 11:38:53.928514 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:53.928591 808013 retry.go:31] will retry after 609.676233ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:54.391106 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0414 11:38:54.427499 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0414 11:38:54.428991 808013 node_ready.go:53] error getting node "old-k8s-version-943255": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-943255": dial tcp 192.168.76.2:8443: connect: connection refused
I0414 11:38:54.538614 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 11:38:54.540738 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:54.540807 808013 retry.go:31] will retry after 961.335993ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:54.606356 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0414 11:38:54.612355 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:54.612426 808013 retry.go:31] will retry after 663.067721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 11:38:54.726519 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:54.726597 808013 retry.go:31] will retry after 560.1256ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 11:38:54.776315 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:54.776344 808013 retry.go:31] will retry after 904.343794ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:55.276433 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0414 11:38:55.287671 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 11:38:55.367266 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:55.367304 808013 retry.go:31] will retry after 1.387206086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 11:38:55.387666 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:55.387742 808013 retry.go:31] will retry after 1.190200388s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:55.502381 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0414 11:38:55.574511 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:55.574542 808013 retry.go:31] will retry after 1.681205788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:55.681719 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0414 11:38:55.759631 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:55.759657 808013 retry.go:31] will retry after 1.491669616s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:56.429068 808013 node_ready.go:53] error getting node "old-k8s-version-943255": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-943255": dial tcp 192.168.76.2:8443: connect: connection refused
I0414 11:38:56.578471 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 11:38:56.667715 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:56.667743 808013 retry.go:31] will retry after 2.059465904s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:56.755082 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0414 11:38:56.832152 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:56.832183 808013 retry.go:31] will retry after 1.569713844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:57.252212 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0414 11:38:57.256454 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0414 11:38:57.355004 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:57.355038 808013 retry.go:31] will retry after 2.786778491s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0414 11:38:57.365166 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:57.365196 808013 retry.go:31] will retry after 2.006776097s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:58.402085 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0414 11:38:58.429704 808013 node_ready.go:53] error getting node "old-k8s-version-943255": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-943255": dial tcp 192.168.76.2:8443: connect: connection refused
W0414 11:38:58.481205 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:58.481236 808013 retry.go:31] will retry after 3.642911382s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:58.727952 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0414 11:38:58.839429 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:58.839459 808013 retry.go:31] will retry after 3.12561263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:59.375248 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0414 11:38:59.539000 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:38:59.539026 808013 retry.go:31] will retry after 1.773862056s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:39:00.142359 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0414 11:39:00.361323 808013 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:39:00.361351 808013 retry.go:31] will retry after 2.033911814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0414 11:39:00.429898 808013 node_ready.go:53] error getting node "old-k8s-version-943255": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-943255": dial tcp 192.168.76.2:8443: connect: connection refused
I0414 11:39:01.313497 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0414 11:39:01.965601 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0414 11:39:02.125032 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0414 11:39:02.395462 808013 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0414 11:39:09.912872 808013 node_ready.go:49] node "old-k8s-version-943255" has status "Ready":"True"
I0414 11:39:09.912896 808013 node_ready.go:38] duration metric: took 17.484370269s for node "old-k8s-version-943255" to be "Ready" ...
I0414 11:39:09.912906 808013 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0414 11:39:10.053957 808013 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-b6hq8" in "kube-system" namespace to be "Ready" ...
I0414 11:39:10.123631 808013 pod_ready.go:93] pod "coredns-74ff55c5b-b6hq8" in "kube-system" namespace has status "Ready":"True"
I0414 11:39:10.123707 808013 pod_ready.go:82] duration metric: took 69.668379ms for pod "coredns-74ff55c5b-b6hq8" in "kube-system" namespace to be "Ready" ...
I0414 11:39:10.123733 808013 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-943255" in "kube-system" namespace to be "Ready" ...
I0414 11:39:10.496189 808013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.182646909s)
I0414 11:39:11.020706 808013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.89564027s)
I0414 11:39:11.020808 808013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.625321086s)
I0414 11:39:11.020826 808013 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-943255"
I0414 11:39:11.020895 808013 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.05526104s)
I0414 11:39:11.024187 808013 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-943255 addons enable metrics-server
I0414 11:39:11.027222 808013 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I0414 11:39:11.030384 808013 addons.go:514] duration metric: took 18.89500572s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
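The addon applies above all fail with "connection refused" and are re-run after varying delays (1.68s, 1.49s, 2.05s, ...) until the apiserver comes back, at which point all four complete at once. As a rough sketch of that pattern (this is illustrative Python, not minikube's actual `retry.go` implementation; `retry` and `apply_manifest` are hypothetical names):

```python
import random
import time

def retry(fn, max_attempts=5, base=0.01):
    """Retry fn with jittered exponential backoff, echoing the
    'will retry after ...' lines in the log above."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Jitter keeps concurrent appliers (storageclass, metrics-server,
            # dashboard, storage-provisioner) from retrying in lockstep.
            delay = base * (2 ** attempt) + random.uniform(0, base)
            time.sleep(delay)

calls = 0
def apply_manifest():
    """Simulated kubectl apply: refuses until the 'apiserver' is up."""
    global calls
    calls += 1
    if calls < 3:
        raise ConnectionRefusedError("localhost:8443 refused")
    return "applied"

result = retry(apply_manifest)
```

In the log, each addon's apply loop retries independently with its own delay, which is why the four `Completed:` lines at 11:39:10-11 report overlapping 8-9s durations.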
I0414 11:39:12.129143 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:14.628666 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:16.628804 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:18.631534 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:21.129074 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:23.129452 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:25.634097 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:28.139472 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:30.629224 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:32.657591 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:35.129715 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:37.130571 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:39.635384 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:42.130130 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:44.629688 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:47.128548 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:49.129514 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:51.628798 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:53.630319 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:55.630788 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:39:57.630848 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:00.134139 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:02.628606 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:04.629534 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:07.129295 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:09.129380 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:11.129871 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:13.130721 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:15.135363 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:17.135479 808013 pod_ready.go:103] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:18.629889 808013 pod_ready.go:93] pod "etcd-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"True"
I0414 11:40:18.629911 808013 pod_ready.go:82] duration metric: took 1m8.506157324s for pod "etcd-old-k8s-version-943255" in "kube-system" namespace to be "Ready" ...
I0414 11:40:18.629925 808013 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-943255" in "kube-system" namespace to be "Ready" ...
I0414 11:40:18.633596 808013 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"True"
I0414 11:40:18.633617 808013 pod_ready.go:82] duration metric: took 3.68437ms for pod "kube-apiserver-old-k8s-version-943255" in "kube-system" namespace to be "Ready" ...
I0414 11:40:18.633627 808013 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-943255" in "kube-system" namespace to be "Ready" ...
I0414 11:40:20.638714 808013 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:22.639737 808013 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:25.138533 808013 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:27.638868 808013 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:30.138465 808013 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:32.138818 808013 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:34.638833 808013 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:35.638572 808013 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"True"
I0414 11:40:35.638596 808013 pod_ready.go:82] duration metric: took 17.004960705s for pod "kube-controller-manager-old-k8s-version-943255" in "kube-system" namespace to be "Ready" ...
I0414 11:40:35.638609 808013 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rhrdw" in "kube-system" namespace to be "Ready" ...
I0414 11:40:35.642347 808013 pod_ready.go:93] pod "kube-proxy-rhrdw" in "kube-system" namespace has status "Ready":"True"
I0414 11:40:35.642371 808013 pod_ready.go:82] duration metric: took 3.754023ms for pod "kube-proxy-rhrdw" in "kube-system" namespace to be "Ready" ...
I0414 11:40:35.642381 808013 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-943255" in "kube-system" namespace to be "Ready" ...
I0414 11:40:35.645705 808013 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-943255" in "kube-system" namespace has status "Ready":"True"
I0414 11:40:35.645725 808013 pod_ready.go:82] duration metric: took 3.336411ms for pod "kube-scheduler-old-k8s-version-943255" in "kube-system" namespace to be "Ready" ...
I0414 11:40:35.645735 808013 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace to be "Ready" ...
I0414 11:40:37.650745 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:40.149640 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:42.166268 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:44.650556 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:46.651533 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:48.651569 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:51.150455 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:53.152034 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:55.651125 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:40:58.150867 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:00.151815 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:02.650682 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:04.650974 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:07.150265 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:09.150952 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:11.650526 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:13.651509 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:15.653052 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:18.150472 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:20.151319 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:22.650266 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:24.650303 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:27.151056 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:29.655451 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:32.150746 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:34.150903 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:36.151442 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:38.652966 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:41.150222 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:43.150852 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:45.151678 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:47.650935 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:49.651078 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:52.151182 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:54.657844 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:57.151407 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:41:59.650744 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:01.651558 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:04.151046 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:06.650623 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:08.650785 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:11.151086 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:13.151819 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:15.652882 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:18.151069 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:20.650640 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:22.651504 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:25.150947 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:27.154835 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:29.650802 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:31.652345 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:34.150804 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:36.650485 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:38.651491 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:41.150777 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:43.650621 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:45.650891 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:48.150428 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:50.151431 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:52.650830 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:54.652615 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:57.149717 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:42:59.150687 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:01.650201 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:03.650857 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:06.150398 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:08.150788 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:10.151466 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:12.651872 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:15.150724 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:17.650842 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:19.650918 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:21.732789 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:24.150687 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:26.151587 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:28.650883 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:30.651282 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:32.652045 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:34.652676 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:36.655943 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:39.151298 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:41.650977 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:43.652052 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:46.150449 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:48.150575 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:50.150951 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:52.150985 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:54.650962 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:56.651439 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:43:59.150723 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:01.151026 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:03.151379 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:05.650815 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:07.652079 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:10.150936 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:12.213829 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:14.651279 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:17.151326 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:19.151681 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:21.650607 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:23.651001 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:25.651703 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:27.651830 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:29.652576 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:32.152298 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:34.152828 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:35.645880 808013 pod_ready.go:82] duration metric: took 4m0.000127627s for pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace to be "Ready" ...
E0414 11:44:35.645914 808013 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0414 11:44:35.645924 808013 pod_ready.go:39] duration metric: took 5m25.733004506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0414 11:44:35.645943 808013 api_server.go:52] waiting for apiserver process to appear ...
I0414 11:44:35.645987 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0414 11:44:35.646052 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0414 11:44:35.707213 808013 cri.go:89] found id: "e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8"
I0414 11:44:35.707232 808013 cri.go:89] found id: "461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42"
I0414 11:44:35.707238 808013 cri.go:89] found id: ""
I0414 11:44:35.707246 808013 logs.go:282] 2 containers: [e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8 461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42]
I0414 11:44:35.707300 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.711354 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.714967 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0414 11:44:35.715030 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0414 11:44:35.764599 808013 cri.go:89] found id: "22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29"
I0414 11:44:35.764619 808013 cri.go:89] found id: "dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c"
I0414 11:44:35.764624 808013 cri.go:89] found id: ""
I0414 11:44:35.764631 808013 logs.go:282] 2 containers: [22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29 dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c]
I0414 11:44:35.764691 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.769219 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.773412 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0414 11:44:35.773552 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0414 11:44:35.822306 808013 cri.go:89] found id: "0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1"
I0414 11:44:35.822385 808013 cri.go:89] found id: "33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507"
I0414 11:44:35.822405 808013 cri.go:89] found id: ""
I0414 11:44:35.822429 808013 logs.go:282] 2 containers: [0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1 33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507]
I0414 11:44:35.822547 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.830286 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.835154 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0414 11:44:35.835274 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0414 11:44:35.918515 808013 cri.go:89] found id: "1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e"
I0414 11:44:35.918538 808013 cri.go:89] found id: "f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993"
I0414 11:44:35.918543 808013 cri.go:89] found id: ""
I0414 11:44:35.918550 808013 logs.go:282] 2 containers: [1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993]
I0414 11:44:35.918605 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.922874 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.926497 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0414 11:44:35.926568 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0414 11:44:35.968096 808013 cri.go:89] found id: "79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803"
I0414 11:44:35.968115 808013 cri.go:89] found id: "13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1"
I0414 11:44:35.968121 808013 cri.go:89] found id: ""
I0414 11:44:35.968129 808013 logs.go:282] 2 containers: [79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803 13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1]
I0414 11:44:35.968182 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.972186 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.976008 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0414 11:44:35.976077 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0414 11:44:36.027663 808013 cri.go:89] found id: "2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6"
I0414 11:44:36.027688 808013 cri.go:89] found id: "d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6"
I0414 11:44:36.027695 808013 cri.go:89] found id: ""
I0414 11:44:36.027702 808013 logs.go:282] 2 containers: [2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6 d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6]
I0414 11:44:36.027763 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.032426 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.036295 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0414 11:44:36.036366 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0414 11:44:36.118257 808013 cri.go:89] found id: "54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d"
I0414 11:44:36.118281 808013 cri.go:89] found id: "e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265"
I0414 11:44:36.118286 808013 cri.go:89] found id: ""
I0414 11:44:36.118293 808013 logs.go:282] 2 containers: [54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265]
I0414 11:44:36.118353 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.122642 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.127259 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0414 11:44:36.127347 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0414 11:44:36.174126 808013 cri.go:89] found id: "e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa"
I0414 11:44:36.174145 808013 cri.go:89] found id: "daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9"
I0414 11:44:36.174150 808013 cri.go:89] found id: ""
I0414 11:44:36.174158 808013 logs.go:282] 2 containers: [e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9]
I0414 11:44:36.174218 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.178084 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.181804 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0414 11:44:36.181872 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0414 11:44:36.251384 808013 cri.go:89] found id: "59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566"
I0414 11:44:36.251408 808013 cri.go:89] found id: ""
I0414 11:44:36.251416 808013 logs.go:282] 1 containers: [59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566]
I0414 11:44:36.251473 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.255371 808013 logs.go:123] Gathering logs for dmesg ...
I0414 11:44:36.255396 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0414 11:44:36.272452 808013 logs.go:123] Gathering logs for etcd [dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c] ...
I0414 11:44:36.272478 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c"
I0414 11:44:36.330279 808013 logs.go:123] Gathering logs for kube-scheduler [1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e] ...
I0414 11:44:36.330308 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e"
I0414 11:44:36.383084 808013 logs.go:123] Gathering logs for kube-proxy [79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803] ...
I0414 11:44:36.383113 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803"
I0414 11:44:36.442830 808013 logs.go:123] Gathering logs for storage-provisioner [e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa] ...
I0414 11:44:36.442858 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa"
I0414 11:44:36.495051 808013 logs.go:123] Gathering logs for container status ...
I0414 11:44:36.495079 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0414 11:44:36.544394 808013 logs.go:123] Gathering logs for describe nodes ...
I0414 11:44:36.544425 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0414 11:44:36.779585 808013 logs.go:123] Gathering logs for coredns [0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1] ...
I0414 11:44:36.779619 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1"
I0414 11:44:36.826566 808013 logs.go:123] Gathering logs for kube-controller-manager [2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6] ...
I0414 11:44:36.826591 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6"
I0414 11:44:36.900218 808013 logs.go:123] Gathering logs for storage-provisioner [daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9] ...
I0414 11:44:36.900315 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9"
I0414 11:44:36.958930 808013 logs.go:123] Gathering logs for kube-apiserver [e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8] ...
I0414 11:44:36.958961 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8"
I0414 11:44:37.042579 808013 logs.go:123] Gathering logs for etcd [22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29] ...
I0414 11:44:37.042697 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29"
I0414 11:44:37.116857 808013 logs.go:123] Gathering logs for kube-proxy [13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1] ...
I0414 11:44:37.117086 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1"
I0414 11:44:37.179108 808013 logs.go:123] Gathering logs for kube-controller-manager [d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6] ...
I0414 11:44:37.179189 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6"
I0414 11:44:37.253469 808013 logs.go:123] Gathering logs for kindnet [e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265] ...
I0414 11:44:37.253548 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265"
I0414 11:44:37.324129 808013 logs.go:123] Gathering logs for containerd ...
I0414 11:44:37.324153 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0414 11:44:37.396825 808013 logs.go:123] Gathering logs for kubelet ...
I0414 11:44:37.396907 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0414 11:44:37.459287 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.671664 661 reflector.go:138] object-"kube-system"/"kube-proxy-token-f29ww": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-f29ww" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.459575 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.678711 661 reflector.go:138] object-"kube-system"/"kindnet-token-srz8d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-srz8d" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.459824 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.679945 661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.460067 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.681204 661 reflector.go:138] object-"kube-system"/"coredns-token-lkmz8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-lkmz8" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.460320 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.681667 661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.460557 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.689142 661 reflector.go:138] object-"default"/"default-token-6r6dc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6r6dc" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.464420 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.710198 661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cbmxs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cbmxs" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.464698 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.823295 661 reflector.go:138] object-"kube-system"/"metrics-server-token-9hmzl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9hmzl" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.472775 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:12 old-k8s-version-943255 kubelet[661]: E0414 11:39:12.493034 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:37.473017 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:12 old-k8s-version-943255 kubelet[661]: E0414 11:39:12.517194 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.475829 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:27 old-k8s-version-943255 kubelet[661]: E0414 11:39:27.201995 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:37.478046 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:34 old-k8s-version-943255 kubelet[661]: E0414 11:39:34.614078 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.478423 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:35 old-k8s-version-943255 kubelet[661]: E0414 11:39:35.636481 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.478777 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:36 old-k8s-version-943255 kubelet[661]: E0414 11:39:36.642188 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.478985 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:39 old-k8s-version-943255 kubelet[661]: E0414 11:39:39.192063 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.479803 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:42 old-k8s-version-943255 kubelet[661]: E0414 11:39:42.656633 661 pod_workers.go:191] Error syncing pod 70b78d06-fcec-4cd3-9143-9c2bd9176c52 ("storage-provisioner_kube-system(70b78d06-fcec-4cd3-9143-9c2bd9176c52)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(70b78d06-fcec-4cd3-9143-9c2bd9176c52)"
W0414 11:44:37.480421 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:47 old-k8s-version-943255 kubelet[661]: E0414 11:39:47.688060 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.485301 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:54 old-k8s-version-943255 kubelet[661]: E0414 11:39:54.213195 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:37.485704 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:55 old-k8s-version-943255 kubelet[661]: E0414 11:39:55.053328 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.486062 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:09 old-k8s-version-943255 kubelet[661]: E0414 11:40:09.192116 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.486679 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:10 old-k8s-version-943255 kubelet[661]: E0414 11:40:10.769424 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.487042 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:15 old-k8s-version-943255 kubelet[661]: E0414 11:40:15.053424 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.487261 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:24 old-k8s-version-943255 kubelet[661]: E0414 11:40:24.192449 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.487611 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:27 old-k8s-version-943255 kubelet[661]: E0414 11:40:27.191843 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.490088 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:38 old-k8s-version-943255 kubelet[661]: E0414 11:40:38.205282 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:37.490441 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:39 old-k8s-version-943255 kubelet[661]: E0414 11:40:39.191989 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.490650 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:49 old-k8s-version-943255 kubelet[661]: E0414 11:40:49.192230 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.491273 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:53 old-k8s-version-943255 kubelet[661]: E0414 11:40:53.882336 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.491633 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:55 old-k8s-version-943255 kubelet[661]: E0414 11:40:55.053117 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.491842 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:02 old-k8s-version-943255 kubelet[661]: E0414 11:41:02.195946 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.492237 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:06 old-k8s-version-943255 kubelet[661]: E0414 11:41:06.191978 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.492502 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:13 old-k8s-version-943255 kubelet[661]: E0414 11:41:13.192235 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.492862 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:21 old-k8s-version-943255 kubelet[661]: E0414 11:41:21.191820 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.493305 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:25 old-k8s-version-943255 kubelet[661]: E0414 11:41:25.192141 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.493673 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:34 old-k8s-version-943255 kubelet[661]: E0414 11:41:34.191786 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.493892 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:39 old-k8s-version-943255 kubelet[661]: E0414 11:41:39.192207 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.494265 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:47 old-k8s-version-943255 kubelet[661]: E0414 11:41:47.191837 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.494510 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:52 old-k8s-version-943255 kubelet[661]: E0414 11:41:52.194045 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.494865 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:00 old-k8s-version-943255 kubelet[661]: E0414 11:42:00.196599 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.497330 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:04 old-k8s-version-943255 kubelet[661]: E0414 11:42:04.200603 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:37.497985 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:16 old-k8s-version-943255 kubelet[661]: E0414 11:42:16.072747 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.498194 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:18 old-k8s-version-943255 kubelet[661]: E0414 11:42:18.200708 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.498551 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:25 old-k8s-version-943255 kubelet[661]: E0414 11:42:25.053597 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.498765 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:30 old-k8s-version-943255 kubelet[661]: E0414 11:42:30.192906 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.499164 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:37 old-k8s-version-943255 kubelet[661]: E0414 11:42:37.192466 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.499401 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:41 old-k8s-version-943255 kubelet[661]: E0414 11:42:41.192467 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.499793 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:49 old-k8s-version-943255 kubelet[661]: E0414 11:42:49.191798 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.500002 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:52 old-k8s-version-943255 kubelet[661]: E0414 11:42:52.192457 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.500398 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:01 old-k8s-version-943255 kubelet[661]: E0414 11:43:01.191833 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.500634 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:04 old-k8s-version-943255 kubelet[661]: E0414 11:43:04.193465 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.500989 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:13 old-k8s-version-943255 kubelet[661]: E0414 11:43:13.191790 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.501207 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:19 old-k8s-version-943255 kubelet[661]: E0414 11:43:19.192065 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.501566 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:25 old-k8s-version-943255 kubelet[661]: E0414 11:43:25.191821 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.501786 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:33 old-k8s-version-943255 kubelet[661]: E0414 11:43:33.192850 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.502138 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:39 old-k8s-version-943255 kubelet[661]: E0414 11:43:39.191923 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.502347 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:48 old-k8s-version-943255 kubelet[661]: E0414 11:43:48.192882 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.502694 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:54 old-k8s-version-943255 kubelet[661]: E0414 11:43:54.192005 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.502902 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:59 old-k8s-version-943255 kubelet[661]: E0414 11:43:59.193181 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.503257 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:05 old-k8s-version-943255 kubelet[661]: E0414 11:44:05.192322 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.503464 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:11 old-k8s-version-943255 kubelet[661]: E0414 11:44:11.192365 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.503864 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:16 old-k8s-version-943255 kubelet[661]: E0414 11:44:16.192478 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.504075 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:23 old-k8s-version-943255 kubelet[661]: E0414 11:44:23.192167 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.504438 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:29 old-k8s-version-943255 kubelet[661]: E0414 11:44:29.194021 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.504652 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:37 old-k8s-version-943255 kubelet[661]: E0414 11:44:37.193029 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0414 11:44:37.504678 808013 logs.go:123] Gathering logs for kube-apiserver [461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42] ...
I0414 11:44:37.504708 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42"
I0414 11:44:37.571239 808013 logs.go:123] Gathering logs for coredns [33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507] ...
I0414 11:44:37.571325 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507"
I0414 11:44:37.625288 808013 logs.go:123] Gathering logs for kube-scheduler [f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993] ...
I0414 11:44:37.625358 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993"
I0414 11:44:37.681927 808013 logs.go:123] Gathering logs for kindnet [54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d] ...
I0414 11:44:37.681964 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d"
I0414 11:44:37.744768 808013 logs.go:123] Gathering logs for kubernetes-dashboard [59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566] ...
I0414 11:44:37.744804 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566"
I0414 11:44:37.801721 808013 out.go:358] Setting ErrFile to fd 2...
I0414 11:44:37.801748 808013 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0414 11:44:37.801858 808013 out.go:270] X Problems detected in kubelet:
W0414 11:44:37.801875 808013 out.go:270] Apr 14 11:44:11 old-k8s-version-943255 kubelet[661]: E0414 11:44:11.192365 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.801882 808013 out.go:270] Apr 14 11:44:16 old-k8s-version-943255 kubelet[661]: E0414 11:44:16.192478 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.801891 808013 out.go:270] Apr 14 11:44:23 old-k8s-version-943255 kubelet[661]: E0414 11:44:23.192167 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.802008 808013 out.go:270] Apr 14 11:44:29 old-k8s-version-943255 kubelet[661]: E0414 11:44:29.194021 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.802018 808013 out.go:270] Apr 14 11:44:37 old-k8s-version-943255 kubelet[661]: E0414 11:44:37.193029 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0414 11:44:37.802024 808013 out.go:358] Setting ErrFile to fd 2...
I0414 11:44:37.802031 808013 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:44:47.803431 808013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0414 11:44:47.816139 808013 api_server.go:72] duration metric: took 5m55.681082464s to wait for apiserver process to appear ...
I0414 11:44:47.816174 808013 api_server.go:88] waiting for apiserver healthz status ...
I0414 11:44:47.816213 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0414 11:44:47.816268 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0414 11:44:47.872941 808013 cri.go:89] found id: "e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8"
I0414 11:44:47.872961 808013 cri.go:89] found id: "461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42"
I0414 11:44:47.872968 808013 cri.go:89] found id: ""
I0414 11:44:47.872975 808013 logs.go:282] 2 containers: [e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8 461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42]
I0414 11:44:47.873036 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:47.877393 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:47.881537 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0414 11:44:47.881608 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0414 11:44:47.951247 808013 cri.go:89] found id: "22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29"
I0414 11:44:47.951266 808013 cri.go:89] found id: "dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c"
I0414 11:44:47.951271 808013 cri.go:89] found id: ""
I0414 11:44:47.951278 808013 logs.go:282] 2 containers: [22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29 dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c]
I0414 11:44:47.951339 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:47.955570 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:47.960046 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0414 11:44:47.960114 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0414 11:44:48.016776 808013 cri.go:89] found id: "0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1"
I0414 11:44:48.016798 808013 cri.go:89] found id: "33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507"
I0414 11:44:48.016803 808013 cri.go:89] found id: ""
I0414 11:44:48.016810 808013 logs.go:282] 2 containers: [0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1 33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507]
I0414 11:44:48.016869 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.021594 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.026096 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0414 11:44:48.026187 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0414 11:44:48.078498 808013 cri.go:89] found id: "1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e"
I0414 11:44:48.078570 808013 cri.go:89] found id: "f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993"
I0414 11:44:48.078588 808013 cri.go:89] found id: ""
I0414 11:44:48.078612 808013 logs.go:282] 2 containers: [1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993]
I0414 11:44:48.078706 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.083758 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.088172 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0414 11:44:48.088249 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0414 11:44:48.137517 808013 cri.go:89] found id: "79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803"
I0414 11:44:48.137536 808013 cri.go:89] found id: "13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1"
I0414 11:44:48.137541 808013 cri.go:89] found id: ""
I0414 11:44:48.137549 808013 logs.go:282] 2 containers: [79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803 13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1]
I0414 11:44:48.137605 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.142443 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.146807 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0414 11:44:48.146876 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0414 11:44:48.211322 808013 cri.go:89] found id: "2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6"
I0414 11:44:48.211341 808013 cri.go:89] found id: "d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6"
I0414 11:44:48.211346 808013 cri.go:89] found id: ""
I0414 11:44:48.211353 808013 logs.go:282] 2 containers: [2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6 d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6]
I0414 11:44:48.211419 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.215344 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.220867 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0414 11:44:48.220986 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0414 11:44:48.290792 808013 cri.go:89] found id: "54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d"
I0414 11:44:48.290868 808013 cri.go:89] found id: "e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265"
I0414 11:44:48.290888 808013 cri.go:89] found id: ""
I0414 11:44:48.290909 808013 logs.go:282] 2 containers: [54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265]
I0414 11:44:48.290995 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.294782 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.298559 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0414 11:44:48.298692 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0414 11:44:48.342392 808013 cri.go:89] found id: "e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa"
I0414 11:44:48.342467 808013 cri.go:89] found id: "daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9"
I0414 11:44:48.342485 808013 cri.go:89] found id: ""
I0414 11:44:48.342507 808013 logs.go:282] 2 containers: [e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9]
I0414 11:44:48.342598 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.346339 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.349761 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0414 11:44:48.349914 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0414 11:44:48.401259 808013 cri.go:89] found id: "59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566"
I0414 11:44:48.401283 808013 cri.go:89] found id: ""
I0414 11:44:48.401291 808013 logs.go:282] 1 containers: [59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566]
I0414 11:44:48.401374 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.405041 808013 logs.go:123] Gathering logs for coredns [33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507] ...
I0414 11:44:48.405061 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507"
I0414 11:44:48.453766 808013 logs.go:123] Gathering logs for kube-controller-manager [d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6] ...
I0414 11:44:48.453818 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6"
I0414 11:44:48.531990 808013 logs.go:123] Gathering logs for kindnet [54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d] ...
I0414 11:44:48.532069 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d"
I0414 11:44:48.612437 808013 logs.go:123] Gathering logs for storage-provisioner [daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9] ...
I0414 11:44:48.612515 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9"
I0414 11:44:48.666338 808013 logs.go:123] Gathering logs for kube-apiserver [e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8] ...
I0414 11:44:48.666414 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8"
I0414 11:44:48.735137 808013 logs.go:123] Gathering logs for kube-apiserver [461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42] ...
I0414 11:44:48.735212 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42"
I0414 11:44:48.803249 808013 logs.go:123] Gathering logs for etcd [dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c] ...
I0414 11:44:48.803325 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c"
I0414 11:44:48.848323 808013 logs.go:123] Gathering logs for kube-proxy [79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803] ...
I0414 11:44:48.848474 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803"
I0414 11:44:48.915130 808013 logs.go:123] Gathering logs for kube-controller-manager [2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6] ...
I0414 11:44:48.915200 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6"
I0414 11:44:49.022893 808013 logs.go:123] Gathering logs for kindnet [e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265] ...
I0414 11:44:49.022926 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265"
I0414 11:44:49.095606 808013 logs.go:123] Gathering logs for kubernetes-dashboard [59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566] ...
I0414 11:44:49.095676 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566"
I0414 11:44:49.150412 808013 logs.go:123] Gathering logs for kubelet ...
I0414 11:44:49.150492 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0414 11:44:49.229956 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.671664 661 reflector.go:138] object-"kube-system"/"kube-proxy-token-f29ww": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-f29ww" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.230231 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.678711 661 reflector.go:138] object-"kube-system"/"kindnet-token-srz8d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-srz8d" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.230461 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.679945 661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.230689 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.681204 661 reflector.go:138] object-"kube-system"/"coredns-token-lkmz8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-lkmz8" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.230907 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.681667 661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.231134 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.689142 661 reflector.go:138] object-"default"/"default-token-6r6dc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6r6dc" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.234893 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.710198 661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cbmxs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cbmxs" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.235118 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.823295 661 reflector.go:138] object-"kube-system"/"metrics-server-token-9hmzl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9hmzl" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.243126 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:12 old-k8s-version-943255 kubelet[661]: E0414 11:39:12.493034 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:49.243360 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:12 old-k8s-version-943255 kubelet[661]: E0414 11:39:12.517194 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.246196 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:27 old-k8s-version-943255 kubelet[661]: E0414 11:39:27.201995 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:49.248284 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:34 old-k8s-version-943255 kubelet[661]: E0414 11:39:34.614078 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.248631 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:35 old-k8s-version-943255 kubelet[661]: E0414 11:39:35.636481 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.249036 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:36 old-k8s-version-943255 kubelet[661]: E0414 11:39:36.642188 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.249240 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:39 old-k8s-version-943255 kubelet[661]: E0414 11:39:39.192063 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.250082 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:42 old-k8s-version-943255 kubelet[661]: E0414 11:39:42.656633 661 pod_workers.go:191] Error syncing pod 70b78d06-fcec-4cd3-9143-9c2bd9176c52 ("storage-provisioner_kube-system(70b78d06-fcec-4cd3-9143-9c2bd9176c52)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(70b78d06-fcec-4cd3-9143-9c2bd9176c52)"
W0414 11:44:49.250700 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:47 old-k8s-version-943255 kubelet[661]: E0414 11:39:47.688060 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.253604 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:54 old-k8s-version-943255 kubelet[661]: E0414 11:39:54.213195 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:49.253942 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:55 old-k8s-version-943255 kubelet[661]: E0414 11:39:55.053328 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.254482 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:09 old-k8s-version-943255 kubelet[661]: E0414 11:40:09.192116 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.255118 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:10 old-k8s-version-943255 kubelet[661]: E0414 11:40:10.769424 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.255538 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:15 old-k8s-version-943255 kubelet[661]: E0414 11:40:15.053424 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.255724 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:24 old-k8s-version-943255 kubelet[661]: E0414 11:40:24.192449 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.256047 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:27 old-k8s-version-943255 kubelet[661]: E0414 11:40:27.191843 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.258929 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:38 old-k8s-version-943255 kubelet[661]: E0414 11:40:38.205282 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:49.259299 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:39 old-k8s-version-943255 kubelet[661]: E0414 11:40:39.191989 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.259508 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:49 old-k8s-version-943255 kubelet[661]: E0414 11:40:49.192230 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.260116 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:53 old-k8s-version-943255 kubelet[661]: E0414 11:40:53.882336 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.260460 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:55 old-k8s-version-943255 kubelet[661]: E0414 11:40:55.053117 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.260682 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:02 old-k8s-version-943255 kubelet[661]: E0414 11:41:02.195946 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.261040 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:06 old-k8s-version-943255 kubelet[661]: E0414 11:41:06.191978 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.261245 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:13 old-k8s-version-943255 kubelet[661]: E0414 11:41:13.192235 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.261591 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:21 old-k8s-version-943255 kubelet[661]: E0414 11:41:21.191820 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.261806 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:25 old-k8s-version-943255 kubelet[661]: E0414 11:41:25.192141 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.262151 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:34 old-k8s-version-943255 kubelet[661]: E0414 11:41:34.191786 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.262353 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:39 old-k8s-version-943255 kubelet[661]: E0414 11:41:39.192207 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.262702 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:47 old-k8s-version-943255 kubelet[661]: E0414 11:41:47.191837 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.262902 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:52 old-k8s-version-943255 kubelet[661]: E0414 11:41:52.194045 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.263247 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:00 old-k8s-version-943255 kubelet[661]: E0414 11:42:00.196599 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.265879 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:04 old-k8s-version-943255 kubelet[661]: E0414 11:42:04.200603 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:49.266542 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:16 old-k8s-version-943255 kubelet[661]: E0414 11:42:16.072747 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.266749 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:18 old-k8s-version-943255 kubelet[661]: E0414 11:42:18.200708 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.267094 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:25 old-k8s-version-943255 kubelet[661]: E0414 11:42:25.053597 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.267293 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:30 old-k8s-version-943255 kubelet[661]: E0414 11:42:30.192906 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.267657 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:37 old-k8s-version-943255 kubelet[661]: E0414 11:42:37.192466 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.267861 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:41 old-k8s-version-943255 kubelet[661]: E0414 11:42:41.192467 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.268203 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:49 old-k8s-version-943255 kubelet[661]: E0414 11:42:49.191798 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.268402 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:52 old-k8s-version-943255 kubelet[661]: E0414 11:42:52.192457 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.268863 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:01 old-k8s-version-943255 kubelet[661]: E0414 11:43:01.191833 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.269073 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:04 old-k8s-version-943255 kubelet[661]: E0414 11:43:04.193465 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.269419 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:13 old-k8s-version-943255 kubelet[661]: E0414 11:43:13.191790 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.269619 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:19 old-k8s-version-943255 kubelet[661]: E0414 11:43:19.192065 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.269970 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:25 old-k8s-version-943255 kubelet[661]: E0414 11:43:25.191821 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.270175 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:33 old-k8s-version-943255 kubelet[661]: E0414 11:43:33.192850 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.270517 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:39 old-k8s-version-943255 kubelet[661]: E0414 11:43:39.191923 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.270717 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:48 old-k8s-version-943255 kubelet[661]: E0414 11:43:48.192882 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.271077 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:54 old-k8s-version-943255 kubelet[661]: E0414 11:43:54.192005 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.271281 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:59 old-k8s-version-943255 kubelet[661]: E0414 11:43:59.193181 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.271688 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:05 old-k8s-version-943255 kubelet[661]: E0414 11:44:05.192322 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.271908 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:11 old-k8s-version-943255 kubelet[661]: E0414 11:44:11.192365 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.272252 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:16 old-k8s-version-943255 kubelet[661]: E0414 11:44:16.192478 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.272455 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:23 old-k8s-version-943255 kubelet[661]: E0414 11:44:23.192167 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.272804 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:29 old-k8s-version-943255 kubelet[661]: E0414 11:44:29.194021 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.273004 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:37 old-k8s-version-943255 kubelet[661]: E0414 11:44:37.193029 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.273348 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:41 old-k8s-version-943255 kubelet[661]: E0414 11:44:41.192234 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
I0414 11:44:49.273379 808013 logs.go:123] Gathering logs for describe nodes ...
I0414 11:44:49.273408 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0414 11:44:49.445671 808013 logs.go:123] Gathering logs for coredns [0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1] ...
I0414 11:44:49.445743 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1"
I0414 11:44:49.499885 808013 logs.go:123] Gathering logs for kube-proxy [13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1] ...
I0414 11:44:49.499954 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1"
I0414 11:44:49.558047 808013 logs.go:123] Gathering logs for containerd ...
I0414 11:44:49.558124 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0414 11:44:49.621804 808013 logs.go:123] Gathering logs for container status ...
I0414 11:44:49.621880 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0414 11:44:49.685699 808013 logs.go:123] Gathering logs for dmesg ...
I0414 11:44:49.685873 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0414 11:44:49.708488 808013 logs.go:123] Gathering logs for kube-scheduler [1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e] ...
I0414 11:44:49.708560 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e"
I0414 11:44:49.761525 808013 logs.go:123] Gathering logs for kube-scheduler [f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993] ...
I0414 11:44:49.761595 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993"
I0414 11:44:49.816691 808013 logs.go:123] Gathering logs for storage-provisioner [e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa] ...
I0414 11:44:49.816767 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa"
I0414 11:44:49.871098 808013 logs.go:123] Gathering logs for etcd [22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29] ...
I0414 11:44:49.871168 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29"
I0414 11:44:49.957340 808013 out.go:358] Setting ErrFile to fd 2...
I0414 11:44:49.957368 808013 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0414 11:44:49.957436 808013 out.go:270] X Problems detected in kubelet:
W0414 11:44:49.957445 808013 out.go:270] Apr 14 11:44:16 old-k8s-version-943255 kubelet[661]: E0414 11:44:16.192478 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.957453 808013 out.go:270] Apr 14 11:44:23 old-k8s-version-943255 kubelet[661]: E0414 11:44:23.192167 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.957460 808013 out.go:270] Apr 14 11:44:29 old-k8s-version-943255 kubelet[661]: E0414 11:44:29.194021 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.957467 808013 out.go:270] Apr 14 11:44:37 old-k8s-version-943255 kubelet[661]: E0414 11:44:37.193029 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.957542 808013 out.go:270] Apr 14 11:44:41 old-k8s-version-943255 kubelet[661]: E0414 11:44:41.192234 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
I0414 11:44:49.957550 808013 out.go:358] Setting ErrFile to fd 2...
I0414 11:44:49.957557 808013 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:44:59.959458 808013 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0414 11:44:59.970865 808013 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0414 11:44:59.974990 808013 out.go:201]
W0414 11:44:59.978044 808013 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0414 11:44:59.978134 808013 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0414 11:44:59.978193 808013 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0414 11:44:59.978233 808013 out.go:270] *
W0414 11:44:59.979165 808013 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0414 11:44:59.983029 808013 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-943255 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-943255
helpers_test.go:235: (dbg) docker inspect old-k8s-version-943255:
-- stdout --
[
{
"Id": "5339fe2ede9ae15d2cc51d4824932529455c1c44195a618ff255313a1292c8bc",
"Created": "2025-04-14T11:35:50.769909012Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 808139,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-04-14T11:38:44.051820394Z",
"FinishedAt": "2025-04-14T11:38:43.104548422Z"
},
"Image": "sha256:e51065ad0661308920dfd7c7ddda445e530a6bf56321f8317cb47e1df0975e7c",
"ResolvConfPath": "/var/lib/docker/containers/5339fe2ede9ae15d2cc51d4824932529455c1c44195a618ff255313a1292c8bc/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/5339fe2ede9ae15d2cc51d4824932529455c1c44195a618ff255313a1292c8bc/hostname",
"HostsPath": "/var/lib/docker/containers/5339fe2ede9ae15d2cc51d4824932529455c1c44195a618ff255313a1292c8bc/hosts",
"LogPath": "/var/lib/docker/containers/5339fe2ede9ae15d2cc51d4824932529455c1c44195a618ff255313a1292c8bc/5339fe2ede9ae15d2cc51d4824932529455c1c44195a618ff255313a1292c8bc-json.log",
"Name": "/old-k8s-version-943255",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-943255:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-943255",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "5339fe2ede9ae15d2cc51d4824932529455c1c44195a618ff255313a1292c8bc",
"LowerDir": "/var/lib/docker/overlay2/d20170ac530955f18fb98f9f2d159c705959339474a7cbdc1b6e3bff14edbdb2-init/diff:/var/lib/docker/overlay2/0f1eb85186734ebd8050f79437d4950f7725c43c0f3bc0c52d0850bd86f1d9d3/diff",
"MergedDir": "/var/lib/docker/overlay2/d20170ac530955f18fb98f9f2d159c705959339474a7cbdc1b6e3bff14edbdb2/merged",
"UpperDir": "/var/lib/docker/overlay2/d20170ac530955f18fb98f9f2d159c705959339474a7cbdc1b6e3bff14edbdb2/diff",
"WorkDir": "/var/lib/docker/overlay2/d20170ac530955f18fb98f9f2d159c705959339474a7cbdc1b6e3bff14edbdb2/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-943255",
"Source": "/var/lib/docker/volumes/old-k8s-version-943255/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-943255",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-943255",
"name.minikube.sigs.k8s.io": "old-k8s-version-943255",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "408c0d4a92713103fec936c06873e36a15e328300f148af6171337851866460d",
"SandboxKey": "/var/run/docker/netns/408c0d4a9271",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33806"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33807"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33810"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33808"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33809"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-943255": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "36:21:d1:9b:2f:ea",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "9211dd2fc00736d294ebebb687388176c287dab28666e1b574e151698de307b9",
"EndpointID": "62855fcfe018e8ba380cb5f4134731e3a36e219661df864ea300a4451ce0bfe5",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-943255",
"5339fe2ede9a"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-943255 -n old-k8s-version-943255
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-943255 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-943255 logs -n 25: (2.587140571s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| start | -p force-systemd-flag-092734 | force-systemd-flag-092734 | jenkins | v1.35.0 | 14 Apr 25 11:34 UTC | 14 Apr 25 11:35 UTC |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-flag-092734 | force-systemd-flag-092734 | jenkins | v1.35.0 | 14 Apr 25 11:35 UTC | 14 Apr 25 11:35 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-flag-092734 | force-systemd-flag-092734 | jenkins | v1.35.0 | 14 Apr 25 11:35 UTC | 14 Apr 25 11:35 UTC |
| start | -p cert-options-253833 | cert-options-253833 | jenkins | v1.35.0 | 14 Apr 25 11:35 UTC | 14 Apr 25 11:35 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-253833 ssh | cert-options-253833 | jenkins | v1.35.0 | 14 Apr 25 11:35 UTC | 14 Apr 25 11:35 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-253833 -- sudo | cert-options-253833 | jenkins | v1.35.0 | 14 Apr 25 11:35 UTC | 14 Apr 25 11:35 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-253833 | cert-options-253833 | jenkins | v1.35.0 | 14 Apr 25 11:35 UTC | 14 Apr 25 11:35 UTC |
| start | -p old-k8s-version-943255 | old-k8s-version-943255 | jenkins | v1.35.0 | 14 Apr 25 11:35 UTC | 14 Apr 25 11:38 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-541494 | cert-expiration-541494 | jenkins | v1.35.0 | 14 Apr 25 11:38 UTC | 14 Apr 25 11:38 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-541494 | cert-expiration-541494 | jenkins | v1.35.0 | 14 Apr 25 11:38 UTC | 14 Apr 25 11:38 UTC |
| start | -p no-preload-391843 | no-preload-391843 | jenkins | v1.35.0 | 14 Apr 25 11:38 UTC | 14 Apr 25 11:39 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p old-k8s-version-943255 | old-k8s-version-943255 | jenkins | v1.35.0 | 14 Apr 25 11:38 UTC | 14 Apr 25 11:38 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-943255 | old-k8s-version-943255 | jenkins | v1.35.0 | 14 Apr 25 11:38 UTC | 14 Apr 25 11:38 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-943255 | old-k8s-version-943255 | jenkins | v1.35.0 | 14 Apr 25 11:38 UTC | 14 Apr 25 11:38 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-943255 | old-k8s-version-943255 | jenkins | v1.35.0 | 14 Apr 25 11:38 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-391843 | no-preload-391843 | jenkins | v1.35.0 | 14 Apr 25 11:39 UTC | 14 Apr 25 11:39 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-391843 | no-preload-391843 | jenkins | v1.35.0 | 14 Apr 25 11:39 UTC | 14 Apr 25 11:39 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-391843 | no-preload-391843 | jenkins | v1.35.0 | 14 Apr 25 11:39 UTC | 14 Apr 25 11:39 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-391843 | no-preload-391843 | jenkins | v1.35.0 | 14 Apr 25 11:39 UTC | 14 Apr 25 11:44 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| image | no-preload-391843 image list | no-preload-391843 | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC | 14 Apr 25 11:44 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-391843 | no-preload-391843 | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC | 14 Apr 25 11:44 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-391843 | no-preload-391843 | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC | 14 Apr 25 11:44 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-391843 | no-preload-391843 | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC | 14 Apr 25 11:44 UTC |
| delete | -p no-preload-391843 | no-preload-391843 | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC | 14 Apr 25 11:44 UTC |
| start | -p embed-certs-680698 | embed-certs-680698 | jenkins | v1.35.0 | 14 Apr 25 11:44 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/14 11:44:32
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.24.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0414 11:44:32.759487 818978 out.go:345] Setting OutFile to fd 1 ...
I0414 11:44:32.760027 818978 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:44:32.760063 818978 out.go:358] Setting ErrFile to fd 2...
I0414 11:44:32.760084 818978 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:44:32.760430 818978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20534-594855/.minikube/bin
I0414 11:44:32.761007 818978 out.go:352] Setting JSON to false
I0414 11:44:32.762108 818978 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":12418,"bootTime":1744618655,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0414 11:44:32.762205 818978 start.go:139] virtualization:
I0414 11:44:32.766044 818978 out.go:177] * [embed-certs-680698] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0414 11:44:32.769300 818978 out.go:177] - MINIKUBE_LOCATION=20534
I0414 11:44:32.769502 818978 notify.go:220] Checking for updates...
I0414 11:44:32.775377 818978 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0414 11:44:32.778454 818978 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20534-594855/kubeconfig
I0414 11:44:32.781601 818978 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20534-594855/.minikube
I0414 11:44:32.784470 818978 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0414 11:44:32.787308 818978 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0414 11:44:32.790862 818978 config.go:182] Loaded profile config "old-k8s-version-943255": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0414 11:44:32.790982 818978 driver.go:394] Setting default libvirt URI to qemu:///system
I0414 11:44:32.825660 818978 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0414 11:44:32.825806 818978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0414 11:44:32.892528 818978 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-14 11:44:32.882705807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0414 11:44:32.892644 818978 docker.go:318] overlay module found
I0414 11:44:32.899937 818978 out.go:177] * Using the docker driver based on user configuration
I0414 11:44:32.903165 818978 start.go:297] selected driver: docker
I0414 11:44:32.903185 818978 start.go:901] validating driver "docker" against <nil>
I0414 11:44:32.903199 818978 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0414 11:44:32.904501 818978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0414 11:44:32.958893 818978 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-14 11:44:32.949649901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0414 11:44:32.959049 818978 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0414 11:44:32.959278 818978 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0414 11:44:32.962196 818978 out.go:177] * Using Docker driver with root privileges
I0414 11:44:32.964983 818978 cni.go:84] Creating CNI manager for ""
I0414 11:44:32.965061 818978 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0414 11:44:32.965075 818978 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0414 11:44:32.965150 818978 start.go:340] cluster config:
{Name:embed-certs-680698 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-680698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 11:44:32.968226 818978 out.go:177] * Starting "embed-certs-680698" primary control-plane node in "embed-certs-680698" cluster
I0414 11:44:32.970951 818978 cache.go:121] Beginning downloading kic base image for docker with containerd
I0414 11:44:32.973985 818978 out.go:177] * Pulling base image v0.0.46-1744107393-20604 ...
I0414 11:44:32.976963 818978 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 11:44:32.977027 818978 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20534-594855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4
I0414 11:44:32.977040 818978 cache.go:56] Caching tarball of preloaded images
I0414 11:44:32.977059 818978 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon
I0414 11:44:32.977132 818978 preload.go:172] Found /home/jenkins/minikube-integration/20534-594855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0414 11:44:32.977142 818978 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0414 11:44:32.977257 818978 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/config.json ...
I0414 11:44:32.977280 818978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/config.json: {Name:mk7b2143d50aa59013ffbcd385327dc0533d3da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:44:33.001992 818978 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon, skipping pull
I0414 11:44:33.002015 818978 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a exists in daemon, skipping load
I0414 11:44:33.002034 818978 cache.go:230] Successfully downloaded all kic artifacts
I0414 11:44:33.002065 818978 start.go:360] acquireMachinesLock for embed-certs-680698: {Name:mkd79cff932d8c3a8a4d53b32250e29160e57c30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0414 11:44:33.002802 818978 start.go:364] duration metric: took 714.224µs to acquireMachinesLock for "embed-certs-680698"
I0414 11:44:33.002842 818978 start.go:93] Provisioning new machine with config: &{Name:embed-certs-680698 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-680698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0414 11:44:33.002932 818978 start.go:125] createHost starting for "" (driver="docker")
I0414 11:44:29.652576 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:32.152298 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:33.007602 818978 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0414 11:44:33.007909 818978 start.go:159] libmachine.API.Create for "embed-certs-680698" (driver="docker")
I0414 11:44:33.007967 818978 client.go:168] LocalClient.Create starting
I0414 11:44:33.008054 818978 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca.pem
I0414 11:44:33.008099 818978 main.go:141] libmachine: Decoding PEM data...
I0414 11:44:33.008119 818978 main.go:141] libmachine: Parsing certificate...
I0414 11:44:33.008188 818978 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20534-594855/.minikube/certs/cert.pem
I0414 11:44:33.008214 818978 main.go:141] libmachine: Decoding PEM data...
I0414 11:44:33.008225 818978 main.go:141] libmachine: Parsing certificate...
I0414 11:44:33.008691 818978 cli_runner.go:164] Run: docker network inspect embed-certs-680698 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0414 11:44:33.028903 818978 cli_runner.go:211] docker network inspect embed-certs-680698 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0414 11:44:33.028994 818978 network_create.go:284] running [docker network inspect embed-certs-680698] to gather additional debugging logs...
I0414 11:44:33.029015 818978 cli_runner.go:164] Run: docker network inspect embed-certs-680698
W0414 11:44:33.045946 818978 cli_runner.go:211] docker network inspect embed-certs-680698 returned with exit code 1
I0414 11:44:33.045992 818978 network_create.go:287] error running [docker network inspect embed-certs-680698]: docker network inspect embed-certs-680698: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-680698 not found
I0414 11:44:33.046006 818978 network_create.go:289] output of [docker network inspect embed-certs-680698]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-680698 not found
** /stderr **
I0414 11:44:33.046103 818978 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0414 11:44:33.064855 818978 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-da2afc92daab IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ee:7f:23:65:0d:d3} reservation:<nil>}
I0414 11:44:33.065228 818978 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0a17d43adfce IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:72:07:d3:b8:1e:c4} reservation:<nil>}
I0414 11:44:33.065533 818978 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f655451ccd45 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:9a:13:91:db:71} reservation:<nil>}
I0414 11:44:33.065880 818978 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9211dd2fc007 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:f1:06:a2:44:22} reservation:<nil>}
I0414 11:44:33.066318 818978 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a06fa0}
I0414 11:44:33.066341 818978 network_create.go:124] attempt to create docker network embed-certs-680698 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0414 11:44:33.066411 818978 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-680698 embed-certs-680698
I0414 11:44:33.130253 818978 network_create.go:108] docker network embed-certs-680698 192.168.85.0/24 created
I0414 11:44:33.130288 818978 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-680698" container
I0414 11:44:33.130368 818978 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0414 11:44:33.145872 818978 cli_runner.go:164] Run: docker volume create embed-certs-680698 --label name.minikube.sigs.k8s.io=embed-certs-680698 --label created_by.minikube.sigs.k8s.io=true
I0414 11:44:33.165675 818978 oci.go:103] Successfully created a docker volume embed-certs-680698
I0414 11:44:33.165760 818978 cli_runner.go:164] Run: docker run --rm --name embed-certs-680698-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-680698 --entrypoint /usr/bin/test -v embed-certs-680698:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -d /var/lib
I0414 11:44:33.736995 818978 oci.go:107] Successfully prepared a docker volume embed-certs-680698
I0414 11:44:33.737052 818978 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 11:44:33.737074 818978 kic.go:194] Starting extracting preloaded images to volume ...
I0414 11:44:33.737159 818978 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20534-594855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-680698:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -I lz4 -xf /preloaded.tar -C /extractDir
I0414 11:44:34.152828 808013 pod_ready.go:103] pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace has status "Ready":"False"
I0414 11:44:35.645880 808013 pod_ready.go:82] duration metric: took 4m0.000127627s for pod "metrics-server-9975d5f86-7rqxd" in "kube-system" namespace to be "Ready" ...
E0414 11:44:35.645914 808013 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0414 11:44:35.645924 808013 pod_ready.go:39] duration metric: took 5m25.733004506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0414 11:44:35.645943 808013 api_server.go:52] waiting for apiserver process to appear ...
I0414 11:44:35.645987 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0414 11:44:35.646052 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0414 11:44:35.707213 808013 cri.go:89] found id: "e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8"
I0414 11:44:35.707232 808013 cri.go:89] found id: "461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42"
I0414 11:44:35.707238 808013 cri.go:89] found id: ""
I0414 11:44:35.707246 808013 logs.go:282] 2 containers: [e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8 461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42]
I0414 11:44:35.707300 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.711354 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.714967 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0414 11:44:35.715030 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0414 11:44:35.764599 808013 cri.go:89] found id: "22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29"
I0414 11:44:35.764619 808013 cri.go:89] found id: "dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c"
I0414 11:44:35.764624 808013 cri.go:89] found id: ""
I0414 11:44:35.764631 808013 logs.go:282] 2 containers: [22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29 dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c]
I0414 11:44:35.764691 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.769219 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.773412 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0414 11:44:35.773552 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0414 11:44:35.822306 808013 cri.go:89] found id: "0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1"
I0414 11:44:35.822385 808013 cri.go:89] found id: "33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507"
I0414 11:44:35.822405 808013 cri.go:89] found id: ""
I0414 11:44:35.822429 808013 logs.go:282] 2 containers: [0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1 33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507]
I0414 11:44:35.822547 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.830286 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.835154 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0414 11:44:35.835274 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0414 11:44:35.918515 808013 cri.go:89] found id: "1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e"
I0414 11:44:35.918538 808013 cri.go:89] found id: "f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993"
I0414 11:44:35.918543 808013 cri.go:89] found id: ""
I0414 11:44:35.918550 808013 logs.go:282] 2 containers: [1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993]
I0414 11:44:35.918605 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.922874 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.926497 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0414 11:44:35.926568 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0414 11:44:35.968096 808013 cri.go:89] found id: "79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803"
I0414 11:44:35.968115 808013 cri.go:89] found id: "13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1"
I0414 11:44:35.968121 808013 cri.go:89] found id: ""
I0414 11:44:35.968129 808013 logs.go:282] 2 containers: [79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803 13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1]
I0414 11:44:35.968182 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.972186 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:35.976008 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0414 11:44:35.976077 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0414 11:44:36.027663 808013 cri.go:89] found id: "2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6"
I0414 11:44:36.027688 808013 cri.go:89] found id: "d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6"
I0414 11:44:36.027695 808013 cri.go:89] found id: ""
I0414 11:44:36.027702 808013 logs.go:282] 2 containers: [2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6 d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6]
I0414 11:44:36.027763 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.032426 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.036295 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0414 11:44:36.036366 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0414 11:44:36.118257 808013 cri.go:89] found id: "54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d"
I0414 11:44:36.118281 808013 cri.go:89] found id: "e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265"
I0414 11:44:36.118286 808013 cri.go:89] found id: ""
I0414 11:44:36.118293 808013 logs.go:282] 2 containers: [54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265]
I0414 11:44:36.118353 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.122642 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.127259 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0414 11:44:36.127347 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0414 11:44:36.174126 808013 cri.go:89] found id: "e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa"
I0414 11:44:36.174145 808013 cri.go:89] found id: "daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9"
I0414 11:44:36.174150 808013 cri.go:89] found id: ""
I0414 11:44:36.174158 808013 logs.go:282] 2 containers: [e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9]
I0414 11:44:36.174218 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.178084 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.181804 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0414 11:44:36.181872 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0414 11:44:36.251384 808013 cri.go:89] found id: "59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566"
I0414 11:44:36.251408 808013 cri.go:89] found id: ""
I0414 11:44:36.251416 808013 logs.go:282] 1 containers: [59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566]
I0414 11:44:36.251473 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:36.255371 808013 logs.go:123] Gathering logs for dmesg ...
I0414 11:44:36.255396 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0414 11:44:36.272452 808013 logs.go:123] Gathering logs for etcd [dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c] ...
I0414 11:44:36.272478 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c"
I0414 11:44:36.330279 808013 logs.go:123] Gathering logs for kube-scheduler [1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e] ...
I0414 11:44:36.330308 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e"
I0414 11:44:36.383084 808013 logs.go:123] Gathering logs for kube-proxy [79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803] ...
I0414 11:44:36.383113 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803"
I0414 11:44:36.442830 808013 logs.go:123] Gathering logs for storage-provisioner [e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa] ...
I0414 11:44:36.442858 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa"
I0414 11:44:36.495051 808013 logs.go:123] Gathering logs for container status ...
I0414 11:44:36.495079 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0414 11:44:36.544394 808013 logs.go:123] Gathering logs for describe nodes ...
I0414 11:44:36.544425 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0414 11:44:36.779585 808013 logs.go:123] Gathering logs for coredns [0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1] ...
I0414 11:44:36.779619 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1"
I0414 11:44:36.826566 808013 logs.go:123] Gathering logs for kube-controller-manager [2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6] ...
I0414 11:44:36.826591 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6"
I0414 11:44:36.900218 808013 logs.go:123] Gathering logs for storage-provisioner [daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9] ...
I0414 11:44:36.900315 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9"
I0414 11:44:36.958930 808013 logs.go:123] Gathering logs for kube-apiserver [e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8] ...
I0414 11:44:36.958961 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8"
I0414 11:44:37.042579 808013 logs.go:123] Gathering logs for etcd [22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29] ...
I0414 11:44:37.042697 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29"
I0414 11:44:37.116857 808013 logs.go:123] Gathering logs for kube-proxy [13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1] ...
I0414 11:44:37.117086 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1"
I0414 11:44:37.179108 808013 logs.go:123] Gathering logs for kube-controller-manager [d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6] ...
I0414 11:44:37.179189 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6"
I0414 11:44:37.253469 808013 logs.go:123] Gathering logs for kindnet [e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265] ...
I0414 11:44:37.253548 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265"
I0414 11:44:37.324129 808013 logs.go:123] Gathering logs for containerd ...
I0414 11:44:37.324153 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0414 11:44:37.396825 808013 logs.go:123] Gathering logs for kubelet ...
I0414 11:44:37.396907 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0414 11:44:37.459287 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.671664 661 reflector.go:138] object-"kube-system"/"kube-proxy-token-f29ww": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-f29ww" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.459575 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.678711 661 reflector.go:138] object-"kube-system"/"kindnet-token-srz8d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-srz8d" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.459824 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.679945 661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.460067 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.681204 661 reflector.go:138] object-"kube-system"/"coredns-token-lkmz8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-lkmz8" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.460320 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.681667 661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.460557 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.689142 661 reflector.go:138] object-"default"/"default-token-6r6dc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6r6dc" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.464420 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.710198 661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cbmxs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cbmxs" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.464698 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.823295 661 reflector.go:138] object-"kube-system"/"metrics-server-token-9hmzl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9hmzl" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:37.472775 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:12 old-k8s-version-943255 kubelet[661]: E0414 11:39:12.493034 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:37.473017 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:12 old-k8s-version-943255 kubelet[661]: E0414 11:39:12.517194 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.475829 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:27 old-k8s-version-943255 kubelet[661]: E0414 11:39:27.201995 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:37.478046 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:34 old-k8s-version-943255 kubelet[661]: E0414 11:39:34.614078 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.478423 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:35 old-k8s-version-943255 kubelet[661]: E0414 11:39:35.636481 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.478777 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:36 old-k8s-version-943255 kubelet[661]: E0414 11:39:36.642188 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.478985 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:39 old-k8s-version-943255 kubelet[661]: E0414 11:39:39.192063 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.479803 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:42 old-k8s-version-943255 kubelet[661]: E0414 11:39:42.656633 661 pod_workers.go:191] Error syncing pod 70b78d06-fcec-4cd3-9143-9c2bd9176c52 ("storage-provisioner_kube-system(70b78d06-fcec-4cd3-9143-9c2bd9176c52)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(70b78d06-fcec-4cd3-9143-9c2bd9176c52)"
W0414 11:44:37.480421 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:47 old-k8s-version-943255 kubelet[661]: E0414 11:39:47.688060 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.485301 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:54 old-k8s-version-943255 kubelet[661]: E0414 11:39:54.213195 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:37.485704 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:55 old-k8s-version-943255 kubelet[661]: E0414 11:39:55.053328 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.486062 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:09 old-k8s-version-943255 kubelet[661]: E0414 11:40:09.192116 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.486679 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:10 old-k8s-version-943255 kubelet[661]: E0414 11:40:10.769424 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.487042 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:15 old-k8s-version-943255 kubelet[661]: E0414 11:40:15.053424 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.487261 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:24 old-k8s-version-943255 kubelet[661]: E0414 11:40:24.192449 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.487611 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:27 old-k8s-version-943255 kubelet[661]: E0414 11:40:27.191843 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.490088 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:38 old-k8s-version-943255 kubelet[661]: E0414 11:40:38.205282 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:37.490441 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:39 old-k8s-version-943255 kubelet[661]: E0414 11:40:39.191989 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.490650 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:49 old-k8s-version-943255 kubelet[661]: E0414 11:40:49.192230 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.491273 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:53 old-k8s-version-943255 kubelet[661]: E0414 11:40:53.882336 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.491633 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:55 old-k8s-version-943255 kubelet[661]: E0414 11:40:55.053117 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.491842 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:02 old-k8s-version-943255 kubelet[661]: E0414 11:41:02.195946 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.492237 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:06 old-k8s-version-943255 kubelet[661]: E0414 11:41:06.191978 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.492502 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:13 old-k8s-version-943255 kubelet[661]: E0414 11:41:13.192235 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.492862 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:21 old-k8s-version-943255 kubelet[661]: E0414 11:41:21.191820 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.493305 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:25 old-k8s-version-943255 kubelet[661]: E0414 11:41:25.192141 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.493673 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:34 old-k8s-version-943255 kubelet[661]: E0414 11:41:34.191786 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.493892 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:39 old-k8s-version-943255 kubelet[661]: E0414 11:41:39.192207 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.494265 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:47 old-k8s-version-943255 kubelet[661]: E0414 11:41:47.191837 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.494510 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:52 old-k8s-version-943255 kubelet[661]: E0414 11:41:52.194045 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.494865 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:00 old-k8s-version-943255 kubelet[661]: E0414 11:42:00.196599 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.497330 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:04 old-k8s-version-943255 kubelet[661]: E0414 11:42:04.200603 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:37.497985 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:16 old-k8s-version-943255 kubelet[661]: E0414 11:42:16.072747 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.498194 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:18 old-k8s-version-943255 kubelet[661]: E0414 11:42:18.200708 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.498551 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:25 old-k8s-version-943255 kubelet[661]: E0414 11:42:25.053597 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.498765 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:30 old-k8s-version-943255 kubelet[661]: E0414 11:42:30.192906 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.499164 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:37 old-k8s-version-943255 kubelet[661]: E0414 11:42:37.192466 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.499401 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:41 old-k8s-version-943255 kubelet[661]: E0414 11:42:41.192467 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.499793 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:49 old-k8s-version-943255 kubelet[661]: E0414 11:42:49.191798 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.500002 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:52 old-k8s-version-943255 kubelet[661]: E0414 11:42:52.192457 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.500398 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:01 old-k8s-version-943255 kubelet[661]: E0414 11:43:01.191833 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.500634 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:04 old-k8s-version-943255 kubelet[661]: E0414 11:43:04.193465 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.500989 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:13 old-k8s-version-943255 kubelet[661]: E0414 11:43:13.191790 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.501207 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:19 old-k8s-version-943255 kubelet[661]: E0414 11:43:19.192065 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.501566 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:25 old-k8s-version-943255 kubelet[661]: E0414 11:43:25.191821 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.501786 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:33 old-k8s-version-943255 kubelet[661]: E0414 11:43:33.192850 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.502138 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:39 old-k8s-version-943255 kubelet[661]: E0414 11:43:39.191923 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.502347 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:48 old-k8s-version-943255 kubelet[661]: E0414 11:43:48.192882 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.502694 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:54 old-k8s-version-943255 kubelet[661]: E0414 11:43:54.192005 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.502902 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:59 old-k8s-version-943255 kubelet[661]: E0414 11:43:59.193181 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.503257 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:05 old-k8s-version-943255 kubelet[661]: E0414 11:44:05.192322 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.503464 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:11 old-k8s-version-943255 kubelet[661]: E0414 11:44:11.192365 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.503864 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:16 old-k8s-version-943255 kubelet[661]: E0414 11:44:16.192478 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.504075 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:23 old-k8s-version-943255 kubelet[661]: E0414 11:44:23.192167 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.504438 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:29 old-k8s-version-943255 kubelet[661]: E0414 11:44:29.194021 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.504652 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:37 old-k8s-version-943255 kubelet[661]: E0414 11:44:37.193029 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0414 11:44:37.504678 808013 logs.go:123] Gathering logs for kube-apiserver [461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42] ...
I0414 11:44:37.504708 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42"
I0414 11:44:37.571239 808013 logs.go:123] Gathering logs for coredns [33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507] ...
I0414 11:44:37.571325 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507"
I0414 11:44:37.625288 808013 logs.go:123] Gathering logs for kube-scheduler [f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993] ...
I0414 11:44:37.625358 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993"
I0414 11:44:37.681927 808013 logs.go:123] Gathering logs for kindnet [54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d] ...
I0414 11:44:37.681964 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d"
I0414 11:44:37.744768 808013 logs.go:123] Gathering logs for kubernetes-dashboard [59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566] ...
I0414 11:44:37.744804 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566"
I0414 11:44:37.801721 808013 out.go:358] Setting ErrFile to fd 2...
I0414 11:44:37.801748 808013 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0414 11:44:37.801858 808013 out.go:270] X Problems detected in kubelet:
W0414 11:44:37.801875 808013 out.go:270] Apr 14 11:44:11 old-k8s-version-943255 kubelet[661]: E0414 11:44:11.192365 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.801882 808013 out.go:270] Apr 14 11:44:16 old-k8s-version-943255 kubelet[661]: E0414 11:44:16.192478 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.801891 808013 out.go:270] Apr 14 11:44:23 old-k8s-version-943255 kubelet[661]: E0414 11:44:23.192167 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:37.802008 808013 out.go:270] Apr 14 11:44:29 old-k8s-version-943255 kubelet[661]: E0414 11:44:29.194021 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:37.802018 808013 out.go:270] Apr 14 11:44:37 old-k8s-version-943255 kubelet[661]: E0414 11:44:37.193029 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0414 11:44:37.802024 808013 out.go:358] Setting ErrFile to fd 2...
I0414 11:44:37.802031 808013 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:44:39.259258 818978 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20534-594855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-680698:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -I lz4 -xf /preloaded.tar -C /extractDir: (5.522058698s)
I0414 11:44:39.259290 818978 kic.go:203] duration metric: took 5.522212201s to extract preloaded images to volume ...
W0414 11:44:39.259441 818978 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0414 11:44:39.259558 818978 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0414 11:44:39.326659 818978 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-680698 --name embed-certs-680698 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-680698 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-680698 --network embed-certs-680698 --ip 192.168.85.2 --volume embed-certs-680698:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a
I0414 11:44:39.660275 818978 cli_runner.go:164] Run: docker container inspect embed-certs-680698 --format={{.State.Running}}
I0414 11:44:39.683766 818978 cli_runner.go:164] Run: docker container inspect embed-certs-680698 --format={{.State.Status}}
I0414 11:44:39.710046 818978 cli_runner.go:164] Run: docker exec embed-certs-680698 stat /var/lib/dpkg/alternatives/iptables
I0414 11:44:39.771940 818978 oci.go:144] the created container "embed-certs-680698" has a running status.
I0414 11:44:39.771974 818978 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20534-594855/.minikube/machines/embed-certs-680698/id_rsa...
I0414 11:44:40.307638 818978 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20534-594855/.minikube/machines/embed-certs-680698/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0414 11:44:40.336342 818978 cli_runner.go:164] Run: docker container inspect embed-certs-680698 --format={{.State.Status}}
I0414 11:44:40.360074 818978 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0414 11:44:40.360104 818978 kic_runner.go:114] Args: [docker exec --privileged embed-certs-680698 chown docker:docker /home/docker/.ssh/authorized_keys]
I0414 11:44:40.420643 818978 cli_runner.go:164] Run: docker container inspect embed-certs-680698 --format={{.State.Status}}
I0414 11:44:40.451520 818978 machine.go:93] provisionDockerMachine start ...
I0414 11:44:40.451626 818978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-680698
I0414 11:44:40.471912 818978 main.go:141] libmachine: Using SSH client type: native
I0414 11:44:40.472248 818978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33816 <nil> <nil>}
I0414 11:44:40.472264 818978 main.go:141] libmachine: About to run SSH command:
hostname
I0414 11:44:40.641813 818978 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-680698
I0414 11:44:40.641839 818978 ubuntu.go:169] provisioning hostname "embed-certs-680698"
I0414 11:44:40.641907 818978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-680698
I0414 11:44:40.665352 818978 main.go:141] libmachine: Using SSH client type: native
I0414 11:44:40.665675 818978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33816 <nil> <nil>}
I0414 11:44:40.665693 818978 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-680698 && echo "embed-certs-680698" | sudo tee /etc/hostname
I0414 11:44:40.811312 818978 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-680698
I0414 11:44:40.811463 818978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-680698
I0414 11:44:40.833711 818978 main.go:141] libmachine: Using SSH client type: native
I0414 11:44:40.834059 818978 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33816 <nil> <nil>}
I0414 11:44:40.834078 818978 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-680698' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-680698/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-680698' | sudo tee -a /etc/hosts;
fi
fi
I0414 11:44:40.961870 818978 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0414 11:44:40.961892 818978 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20534-594855/.minikube CaCertPath:/home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20534-594855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20534-594855/.minikube}
I0414 11:44:40.961927 818978 ubuntu.go:177] setting up certificates
I0414 11:44:40.961938 818978 provision.go:84] configureAuth start
I0414 11:44:40.962001 818978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-680698
I0414 11:44:40.984285 818978 provision.go:143] copyHostCerts
I0414 11:44:40.984357 818978 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-594855/.minikube/ca.pem, removing ...
I0414 11:44:40.984366 818978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-594855/.minikube/ca.pem
I0414 11:44:40.984455 818978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20534-594855/.minikube/ca.pem (1082 bytes)
I0414 11:44:40.984556 818978 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-594855/.minikube/cert.pem, removing ...
I0414 11:44:40.984561 818978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-594855/.minikube/cert.pem
I0414 11:44:40.984588 818978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20534-594855/.minikube/cert.pem (1123 bytes)
I0414 11:44:40.984652 818978 exec_runner.go:144] found /home/jenkins/minikube-integration/20534-594855/.minikube/key.pem, removing ...
I0414 11:44:40.984657 818978 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20534-594855/.minikube/key.pem
I0414 11:44:40.984680 818978 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20534-594855/.minikube/key.pem (1675 bytes)
I0414 11:44:40.984733 818978 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20534-594855/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca-key.pem org=jenkins.embed-certs-680698 san=[127.0.0.1 192.168.85.2 embed-certs-680698 localhost minikube]
I0414 11:44:41.414650 818978 provision.go:177] copyRemoteCerts
I0414 11:44:41.414728 818978 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0414 11:44:41.414776 818978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-680698
I0414 11:44:41.433313 818978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/embed-certs-680698/id_rsa Username:docker}
I0414 11:44:41.526904 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0414 11:44:41.552811 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0414 11:44:41.579049 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0414 11:44:41.605661 818978 provision.go:87] duration metric: took 643.709448ms to configureAuth
I0414 11:44:41.605688 818978 ubuntu.go:193] setting minikube options for container-runtime
I0414 11:44:41.605921 818978 config.go:182] Loaded profile config "embed-certs-680698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0414 11:44:41.605930 818978 machine.go:96] duration metric: took 1.154392417s to provisionDockerMachine
I0414 11:44:41.605937 818978 client.go:171] duration metric: took 8.597960822s to LocalClient.Create
I0414 11:44:41.605959 818978 start.go:167] duration metric: took 8.598053664s to libmachine.API.Create "embed-certs-680698"
I0414 11:44:41.605967 818978 start.go:293] postStartSetup for "embed-certs-680698" (driver="docker")
I0414 11:44:41.605976 818978 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0414 11:44:41.606027 818978 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0414 11:44:41.606070 818978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-680698
I0414 11:44:41.623232 818978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/embed-certs-680698/id_rsa Username:docker}
I0414 11:44:41.719844 818978 ssh_runner.go:195] Run: cat /etc/os-release
I0414 11:44:41.723815 818978 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0414 11:44:41.723859 818978 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0414 11:44:41.723871 818978 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0414 11:44:41.723879 818978 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0414 11:44:41.723895 818978 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-594855/.minikube/addons for local assets ...
I0414 11:44:41.723952 818978 filesync.go:126] Scanning /home/jenkins/minikube-integration/20534-594855/.minikube/files for local assets ...
I0414 11:44:41.724035 818978 filesync.go:149] local asset: /home/jenkins/minikube-integration/20534-594855/.minikube/files/etc/ssl/certs/6002272.pem -> 6002272.pem in /etc/ssl/certs
I0414 11:44:41.724140 818978 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0414 11:44:41.733341 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/files/etc/ssl/certs/6002272.pem --> /etc/ssl/certs/6002272.pem (1708 bytes)
I0414 11:44:41.759561 818978 start.go:296] duration metric: took 153.580322ms for postStartSetup
I0414 11:44:41.760149 818978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-680698
I0414 11:44:41.778385 818978 profile.go:143] Saving config to /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/config.json ...
I0414 11:44:41.778675 818978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0414 11:44:41.778727 818978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-680698
I0414 11:44:41.797315 818978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/embed-certs-680698/id_rsa Username:docker}
I0414 11:44:41.887709 818978 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0414 11:44:41.893076 818978 start.go:128] duration metric: took 8.890127902s to createHost
I0414 11:44:41.893103 818978 start.go:83] releasing machines lock for "embed-certs-680698", held for 8.890284302s
I0414 11:44:41.893194 818978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-680698
I0414 11:44:41.917218 818978 ssh_runner.go:195] Run: cat /version.json
I0414 11:44:41.917263 818978 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0414 11:44:41.917287 818978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-680698
I0414 11:44:41.917333 818978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-680698
I0414 11:44:41.937139 818978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/embed-certs-680698/id_rsa Username:docker}
I0414 11:44:41.945912 818978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33816 SSHKeyPath:/home/jenkins/minikube-integration/20534-594855/.minikube/machines/embed-certs-680698/id_rsa Username:docker}
I0414 11:44:42.032227 818978 ssh_runner.go:195] Run: systemctl --version
I0414 11:44:42.174861 818978 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0414 11:44:42.181817 818978 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0414 11:44:42.226194 818978 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0414 11:44:42.226371 818978 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0414 11:44:42.272783 818978 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0414 11:44:42.272822 818978 start.go:495] detecting cgroup driver to use...
I0414 11:44:42.272872 818978 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0414 11:44:42.272944 818978 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0414 11:44:42.291428 818978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0414 11:44:42.306873 818978 docker.go:217] disabling cri-docker service (if available) ...
I0414 11:44:42.307011 818978 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0414 11:44:42.327133 818978 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0414 11:44:42.346862 818978 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0414 11:44:42.484525 818978 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0414 11:44:42.584776 818978 docker.go:233] disabling docker service ...
I0414 11:44:42.584856 818978 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0414 11:44:42.610005 818978 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0414 11:44:42.623965 818978 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0414 11:44:42.750469 818978 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0414 11:44:42.834809 818978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0414 11:44:42.846562 818978 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0414 11:44:42.862569 818978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0414 11:44:42.873110 818978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0414 11:44:42.883366 818978 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0414 11:44:42.883490 818978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0414 11:44:42.894143 818978 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 11:44:42.907245 818978 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0414 11:44:42.916757 818978 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0414 11:44:42.926379 818978 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0414 11:44:42.935798 818978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0414 11:44:42.946366 818978 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0414 11:44:42.956913 818978 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0414 11:44:42.967406 818978 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0414 11:44:42.976097 818978 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
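The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place (sandbox image, cgroup driver, runtime type, CNI conf dir). A minimal sketch of the cgroup-driver edit, demoed on a scratch copy rather than the real config file:

```shell
# Reproduce the SystemdCgroup rewrite from the log on a throwaway file.
cfg=/tmp/containerd-demo.toml
printf '    SystemdCgroup = true\n' > "$cfg"
# Same GNU sed expression the log runs (indentation is preserved via \1).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

The capture group keeps the original leading whitespace, so the edit works at any nesting depth inside the TOML.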
I0414 11:44:42.984804 818978 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 11:44:43.074951 818978 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0414 11:44:43.201816 818978 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0414 11:44:43.201930 818978 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0414 11:44:43.206172 818978 start.go:563] Will wait 60s for crictl version
I0414 11:44:43.206263 818978 ssh_runner.go:195] Run: which crictl
I0414 11:44:43.209895 818978 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0414 11:44:43.251861 818978 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0414 11:44:43.251985 818978 ssh_runner.go:195] Run: containerd --version
I0414 11:44:43.276304 818978 ssh_runner.go:195] Run: containerd --version
I0414 11:44:43.302635 818978 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.27 ...
I0414 11:44:43.305577 818978 cli_runner.go:164] Run: docker network inspect embed-certs-680698 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0414 11:44:43.320281 818978 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0414 11:44:43.324075 818978 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0414 11:44:43.336172 818978 kubeadm.go:883] updating cluster {Name:embed-certs-680698 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-680698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0414 11:44:43.336284 818978 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0414 11:44:43.336344 818978 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 11:44:43.377204 818978 containerd.go:627] all images are preloaded for containerd runtime.
I0414 11:44:43.377228 818978 containerd.go:534] Images already preloaded, skipping extraction
I0414 11:44:43.377287 818978 ssh_runner.go:195] Run: sudo crictl images --output json
I0414 11:44:43.415320 818978 containerd.go:627] all images are preloaded for containerd runtime.
I0414 11:44:43.415344 818978 cache_images.go:84] Images are preloaded, skipping loading
I0414 11:44:43.415352 818978 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 containerd true true} ...
I0414 11:44:43.415442 818978 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-680698 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:embed-certs-680698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0414 11:44:43.415511 818978 ssh_runner.go:195] Run: sudo crictl info
I0414 11:44:43.453952 818978 cni.go:84] Creating CNI manager for ""
I0414 11:44:43.453983 818978 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0414 11:44:43.453994 818978 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0414 11:44:43.454018 818978 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-680698 NodeName:embed-certs-680698 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0414 11:44:43.454130 818978 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-680698"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0414 11:44:43.454205 818978 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0414 11:44:43.465329 818978 binaries.go:44] Found k8s binaries, skipping transfer
I0414 11:44:43.465407 818978 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0414 11:44:43.475174 818978 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0414 11:44:43.494223 818978 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0414 11:44:43.515412 818978 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0414 11:44:43.534497 818978 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0414 11:44:43.538027 818978 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
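The bash one-liner above is minikube's idempotent hosts-file update: strip any existing line for the hostname, then append the current mapping. A sketch of the same pattern against a scratch file (the tab separator is taken from the `$'\t...'` grep anchor in the log):

```shell
# Idempotent hosts-entry rewrite, demoed on /tmp instead of /etc/hosts.
hosts=/tmp/hosts-demo
printf '127.0.0.1\tlocalhost\n192.168.85.9\tcontrol-plane.minikube.internal\n' > "$hosts"
# Drop any stale "<tab>control-plane.minikube.internal" line, append the new IP.
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.85.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Writing to a temp file and copying it back (as the log does with `/tmp/h.$$`) avoids truncating /etc/hosts mid-read.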
I0414 11:44:43.549039 818978 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0414 11:44:43.630345 818978 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0414 11:44:43.644307 818978 certs.go:68] Setting up /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698 for IP: 192.168.85.2
I0414 11:44:43.644329 818978 certs.go:194] generating shared ca certs ...
I0414 11:44:43.644345 818978 certs.go:226] acquiring lock for ca certs: {Name:mkc72929fdde159a4ce614d0ceb68f60716f5790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:44:43.644522 818978 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20534-594855/.minikube/ca.key
I0414 11:44:43.644572 818978 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20534-594855/.minikube/proxy-client-ca.key
I0414 11:44:43.644585 818978 certs.go:256] generating profile certs ...
I0414 11:44:43.644644 818978 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/client.key
I0414 11:44:43.644668 818978 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/client.crt with IP's: []
I0414 11:44:44.085659 818978 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/client.crt ...
I0414 11:44:44.085698 818978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/client.crt: {Name:mk780a03bc4b13067f4b7fd783759097f13cce89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:44:44.086014 818978 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/client.key ...
I0414 11:44:44.086030 818978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/client.key: {Name:mkd4dc780a4531b0defac099246457024829c1d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:44:44.086569 818978 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.key.2ff3eea1
I0414 11:44:44.086595 818978 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.crt.2ff3eea1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0414 11:44:44.613082 818978 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.crt.2ff3eea1 ...
I0414 11:44:44.613113 818978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.crt.2ff3eea1: {Name:mkdc82dc204c11c100efba10e89ee6ac2334d894 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:44:44.613839 818978 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.key.2ff3eea1 ...
I0414 11:44:44.613859 818978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.key.2ff3eea1: {Name:mkb3688f6bca7d6bc659962ff71b551056aec078 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:44:44.613965 818978 certs.go:381] copying /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.crt.2ff3eea1 -> /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.crt
I0414 11:44:44.614044 818978 certs.go:385] copying /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.key.2ff3eea1 -> /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.key
I0414 11:44:44.614105 818978 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/proxy-client.key
I0414 11:44:44.614126 818978 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/proxy-client.crt with IP's: []
I0414 11:44:44.685487 818978 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/proxy-client.crt ...
I0414 11:44:44.685513 818978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/proxy-client.crt: {Name:mkf6a3925425c275652f2c9e6ab88e2c03c24a40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:44:44.685701 818978 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/proxy-client.key ...
I0414 11:44:44.685715 818978 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/proxy-client.key: {Name:mkfae83547d15261b767924b7bbc5e806083033f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0414 11:44:44.686402 818978 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/600227.pem (1338 bytes)
W0414 11:44:44.686465 818978 certs.go:480] ignoring /home/jenkins/minikube-integration/20534-594855/.minikube/certs/600227_empty.pem, impossibly tiny 0 bytes
I0414 11:44:44.686484 818978 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca-key.pem (1675 bytes)
I0414 11:44:44.686513 818978 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/ca.pem (1082 bytes)
I0414 11:44:44.686570 818978 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/cert.pem (1123 bytes)
I0414 11:44:44.686621 818978 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/certs/key.pem (1675 bytes)
I0414 11:44:44.686686 818978 certs.go:484] found cert: /home/jenkins/minikube-integration/20534-594855/.minikube/files/etc/ssl/certs/6002272.pem (1708 bytes)
I0414 11:44:44.687241 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0414 11:44:44.719421 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0414 11:44:44.746063 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0414 11:44:44.771766 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0414 11:44:44.797037 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0414 11:44:44.822904 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0414 11:44:44.848505 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0414 11:44:44.872768 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/profiles/embed-certs-680698/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0414 11:44:44.902092 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/certs/600227.pem --> /usr/share/ca-certificates/600227.pem (1338 bytes)
I0414 11:44:44.928404 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/files/etc/ssl/certs/6002272.pem --> /usr/share/ca-certificates/6002272.pem (1708 bytes)
I0414 11:44:44.954108 818978 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20534-594855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0414 11:44:44.980797 818978 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0414 11:44:44.999501 818978 ssh_runner.go:195] Run: openssl version
I0414 11:44:45.021116 818978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/600227.pem && ln -fs /usr/share/ca-certificates/600227.pem /etc/ssl/certs/600227.pem"
I0414 11:44:45.058179 818978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/600227.pem
I0414 11:44:45.063523 818978 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 14 11:01 /usr/share/ca-certificates/600227.pem
I0414 11:44:45.063597 818978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/600227.pem
I0414 11:44:45.073793 818978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/600227.pem /etc/ssl/certs/51391683.0"
I0414 11:44:45.086974 818978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6002272.pem && ln -fs /usr/share/ca-certificates/6002272.pem /etc/ssl/certs/6002272.pem"
I0414 11:44:45.102566 818978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6002272.pem
I0414 11:44:45.108096 818978 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 14 11:01 /usr/share/ca-certificates/6002272.pem
I0414 11:44:45.108175 818978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6002272.pem
I0414 11:44:45.118283 818978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6002272.pem /etc/ssl/certs/3ec20f2e.0"
I0414 11:44:45.131214 818978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0414 11:44:45.147905 818978 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0414 11:44:45.153888 818978 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 14 10:53 /usr/share/ca-certificates/minikubeCA.pem
I0414 11:44:45.154053 818978 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0414 11:44:45.166901 818978 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
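The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: OpenSSL resolves CAs in /etc/ssl/certs by `<subject-hash>.0`, which is why each cert install is followed by an `openssl x509 -hash` plus `ln -fs`. A sketch with a throwaway self-signed cert standing in for minikubeCA.pem:

```shell
# Derive the "<hash>.0" link name the way the log does, using a demo cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/demo.pem)  # 8 hex digits
ln -fs /tmp/demo.pem "/tmp/${hash}.0"
readlink "/tmp/${hash}.0"
```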
I0414 11:44:45.182340 818978 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0414 11:44:45.188192 818978 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0414 11:44:45.188292 818978 kubeadm.go:392] StartCluster: {Name:embed-certs-680698 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-680698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0414 11:44:45.188392 818978 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0414 11:44:45.188470 818978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0414 11:44:45.263448 818978 cri.go:89] found id: ""
I0414 11:44:45.263638 818978 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0414 11:44:45.279655 818978 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0414 11:44:45.296498 818978 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0414 11:44:45.296799 818978 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0414 11:44:45.321240 818978 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0414 11:44:45.321331 818978 kubeadm.go:157] found existing configuration files:
I0414 11:44:45.321875 818978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0414 11:44:45.341032 818978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0414 11:44:45.341148 818978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0414 11:44:45.353869 818978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0414 11:44:45.366643 818978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0414 11:44:45.366770 818978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0414 11:44:45.376436 818978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0414 11:44:45.389386 818978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0414 11:44:45.389489 818978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0414 11:44:45.399200 818978 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0414 11:44:45.409184 818978 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0414 11:44:45.409249 818978 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
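The four grep-then-rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so `kubeadm init` regenerates it. A sketch of that loop, demoed on a scratch directory (minikube does this with sudo against /etc/kubernetes):

```shell
# Keep a kubeconfig only if it references the current control-plane endpoint.
dir=/tmp/kubecfg-demo; mkdir -p "$dir"
endpoint="https://control-plane.minikube.internal:8443"
printf 'server: %s\n' "$endpoint" > "$dir/admin.conf"      # current, kept
printf 'server: https://old:8443\n' > "$dir/kubelet.conf"  # stale, removed
for f in "$dir"/*.conf; do
  grep -q "$endpoint" "$f" || rm -f "$f"
done
ls "$dir"
```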
I0414 11:44:45.419132 818978 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0414 11:44:45.466050 818978 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0414 11:44:45.466165 818978 kubeadm.go:310] [preflight] Running pre-flight checks
I0414 11:44:45.487500 818978 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0414 11:44:45.487698 818978 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1081-aws
I0414 11:44:45.487748 818978 kubeadm.go:310] OS: Linux
I0414 11:44:45.487817 818978 kubeadm.go:310] CGROUPS_CPU: enabled
I0414 11:44:45.487887 818978 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0414 11:44:45.487975 818978 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0414 11:44:45.488055 818978 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0414 11:44:45.488142 818978 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0414 11:44:45.488224 818978 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0414 11:44:45.488301 818978 kubeadm.go:310] CGROUPS_PIDS: enabled
I0414 11:44:45.488381 818978 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0414 11:44:45.488476 818978 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0414 11:44:45.558804 818978 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0414 11:44:45.558994 818978 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0414 11:44:45.559198 818978 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0414 11:44:45.566221 818978 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0414 11:44:45.572993 818978 out.go:235] - Generating certificates and keys ...
I0414 11:44:45.573113 818978 kubeadm.go:310] [certs] Using existing ca certificate authority
I0414 11:44:45.573236 818978 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0414 11:44:46.498002 818978 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0414 11:44:47.082859 818978 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0414 11:44:47.666197 818978 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0414 11:44:47.803431 808013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0414 11:44:47.816139 808013 api_server.go:72] duration metric: took 5m55.681082464s to wait for apiserver process to appear ...
I0414 11:44:47.816174 808013 api_server.go:88] waiting for apiserver healthz status ...
I0414 11:44:47.816213 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0414 11:44:47.816268 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0414 11:44:47.872941 808013 cri.go:89] found id: "e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8"
I0414 11:44:47.872961 808013 cri.go:89] found id: "461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42"
I0414 11:44:47.872968 808013 cri.go:89] found id: ""
I0414 11:44:47.872975 808013 logs.go:282] 2 containers: [e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8 461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42]
I0414 11:44:47.873036 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:47.877393 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:47.881537 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0414 11:44:47.881608 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0414 11:44:47.951247 808013 cri.go:89] found id: "22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29"
I0414 11:44:47.951266 808013 cri.go:89] found id: "dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c"
I0414 11:44:47.951271 808013 cri.go:89] found id: ""
I0414 11:44:47.951278 808013 logs.go:282] 2 containers: [22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29 dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c]
I0414 11:44:47.951339 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:47.955570 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:47.960046 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0414 11:44:47.960114 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0414 11:44:48.016776 808013 cri.go:89] found id: "0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1"
I0414 11:44:48.016798 808013 cri.go:89] found id: "33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507"
I0414 11:44:48.016803 808013 cri.go:89] found id: ""
I0414 11:44:48.016810 808013 logs.go:282] 2 containers: [0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1 33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507]
I0414 11:44:48.016869 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.021594 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.026096 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0414 11:44:48.026187 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0414 11:44:48.078498 808013 cri.go:89] found id: "1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e"
I0414 11:44:48.078570 808013 cri.go:89] found id: "f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993"
I0414 11:44:48.078588 808013 cri.go:89] found id: ""
I0414 11:44:48.078612 808013 logs.go:282] 2 containers: [1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993]
I0414 11:44:48.078706 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.083758 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.088172 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0414 11:44:48.088249 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0414 11:44:48.137517 808013 cri.go:89] found id: "79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803"
I0414 11:44:48.137536 808013 cri.go:89] found id: "13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1"
I0414 11:44:48.137541 808013 cri.go:89] found id: ""
I0414 11:44:48.137549 808013 logs.go:282] 2 containers: [79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803 13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1]
I0414 11:44:48.137605 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.142443 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.146807 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0414 11:44:48.146876 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0414 11:44:48.211322 808013 cri.go:89] found id: "2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6"
I0414 11:44:48.211341 808013 cri.go:89] found id: "d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6"
I0414 11:44:48.211346 808013 cri.go:89] found id: ""
I0414 11:44:48.211353 808013 logs.go:282] 2 containers: [2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6 d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6]
I0414 11:44:48.211419 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.215344 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.220867 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0414 11:44:48.220986 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0414 11:44:48.290792 808013 cri.go:89] found id: "54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d"
I0414 11:44:48.290868 808013 cri.go:89] found id: "e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265"
I0414 11:44:48.290888 808013 cri.go:89] found id: ""
I0414 11:44:48.290909 808013 logs.go:282] 2 containers: [54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265]
I0414 11:44:48.290995 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.294782 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.298559 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0414 11:44:48.298692 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0414 11:44:48.342392 808013 cri.go:89] found id: "e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa"
I0414 11:44:48.342467 808013 cri.go:89] found id: "daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9"
I0414 11:44:48.342485 808013 cri.go:89] found id: ""
I0414 11:44:48.342507 808013 logs.go:282] 2 containers: [e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9]
I0414 11:44:48.342598 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.346339 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.349761 808013 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0414 11:44:48.349914 808013 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0414 11:44:48.401259 808013 cri.go:89] found id: "59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566"
I0414 11:44:48.401283 808013 cri.go:89] found id: ""
I0414 11:44:48.401291 808013 logs.go:282] 1 containers: [59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566]
I0414 11:44:48.401374 808013 ssh_runner.go:195] Run: which crictl
I0414 11:44:48.405041 808013 logs.go:123] Gathering logs for coredns [33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507] ...
I0414 11:44:48.405061 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507"
I0414 11:44:48.453766 808013 logs.go:123] Gathering logs for kube-controller-manager [d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6] ...
I0414 11:44:48.453818 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6"
I0414 11:44:48.531990 808013 logs.go:123] Gathering logs for kindnet [54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d] ...
I0414 11:44:48.532069 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d"
I0414 11:44:48.612437 808013 logs.go:123] Gathering logs for storage-provisioner [daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9] ...
I0414 11:44:48.612515 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9"
I0414 11:44:48.666338 808013 logs.go:123] Gathering logs for kube-apiserver [e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8] ...
I0414 11:44:48.666414 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8"
I0414 11:44:47.867784 818978 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0414 11:44:48.196845 818978 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0414 11:44:48.197398 818978 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-680698 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0414 11:44:49.074426 818978 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0414 11:44:49.074738 818978 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-680698 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0414 11:44:49.404244 818978 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0414 11:44:49.637560 818978 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0414 11:44:49.905414 818978 kubeadm.go:310] [certs] Generating "sa" key and public key
I0414 11:44:49.905754 818978 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0414 11:44:50.134680 818978 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0414 11:44:50.768250 818978 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0414 11:44:51.039108 818978 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0414 11:44:51.424572 818978 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0414 11:44:51.971790 818978 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0414 11:44:51.974034 818978 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0414 11:44:51.977799 818978 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0414 11:44:51.981075 818978 out.go:235] - Booting up control plane ...
I0414 11:44:51.981210 818978 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0414 11:44:51.981300 818978 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0414 11:44:51.981931 818978 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0414 11:44:51.993525 818978 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0414 11:44:52.000548 818978 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0414 11:44:52.000894 818978 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0414 11:44:52.108079 818978 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0414 11:44:52.108202 818978 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0414 11:44:48.735137 808013 logs.go:123] Gathering logs for kube-apiserver [461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42] ...
I0414 11:44:48.735212 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42"
I0414 11:44:48.803249 808013 logs.go:123] Gathering logs for etcd [dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c] ...
I0414 11:44:48.803325 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c"
I0414 11:44:48.848323 808013 logs.go:123] Gathering logs for kube-proxy [79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803] ...
I0414 11:44:48.848474 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803"
I0414 11:44:48.915130 808013 logs.go:123] Gathering logs for kube-controller-manager [2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6] ...
I0414 11:44:48.915200 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6"
I0414 11:44:49.022893 808013 logs.go:123] Gathering logs for kindnet [e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265] ...
I0414 11:44:49.022926 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265"
I0414 11:44:49.095606 808013 logs.go:123] Gathering logs for kubernetes-dashboard [59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566] ...
I0414 11:44:49.095676 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566"
I0414 11:44:49.150412 808013 logs.go:123] Gathering logs for kubelet ...
I0414 11:44:49.150492 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0414 11:44:49.229956 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.671664 661 reflector.go:138] object-"kube-system"/"kube-proxy-token-f29ww": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-f29ww" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.230231 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.678711 661 reflector.go:138] object-"kube-system"/"kindnet-token-srz8d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-srz8d" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.230461 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.679945 661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.230689 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.681204 661 reflector.go:138] object-"kube-system"/"coredns-token-lkmz8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-lkmz8" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.230907 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.681667 661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.231134 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.689142 661 reflector.go:138] object-"default"/"default-token-6r6dc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6r6dc" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.234893 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.710198 661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cbmxs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cbmxs" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.235118 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:09 old-k8s-version-943255 kubelet[661]: E0414 11:39:09.823295 661 reflector.go:138] object-"kube-system"/"metrics-server-token-9hmzl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-9hmzl" is forbidden: User "system:node:old-k8s-version-943255" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-943255' and this object
W0414 11:44:49.243126 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:12 old-k8s-version-943255 kubelet[661]: E0414 11:39:12.493034 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:49.243360 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:12 old-k8s-version-943255 kubelet[661]: E0414 11:39:12.517194 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.246196 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:27 old-k8s-version-943255 kubelet[661]: E0414 11:39:27.201995 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:49.248284 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:34 old-k8s-version-943255 kubelet[661]: E0414 11:39:34.614078 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.248631 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:35 old-k8s-version-943255 kubelet[661]: E0414 11:39:35.636481 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.249036 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:36 old-k8s-version-943255 kubelet[661]: E0414 11:39:36.642188 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.249240 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:39 old-k8s-version-943255 kubelet[661]: E0414 11:39:39.192063 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.250082 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:42 old-k8s-version-943255 kubelet[661]: E0414 11:39:42.656633 661 pod_workers.go:191] Error syncing pod 70b78d06-fcec-4cd3-9143-9c2bd9176c52 ("storage-provisioner_kube-system(70b78d06-fcec-4cd3-9143-9c2bd9176c52)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(70b78d06-fcec-4cd3-9143-9c2bd9176c52)"
W0414 11:44:49.250700 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:47 old-k8s-version-943255 kubelet[661]: E0414 11:39:47.688060 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.253604 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:54 old-k8s-version-943255 kubelet[661]: E0414 11:39:54.213195 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:49.253942 808013 logs.go:138] Found kubelet problem: Apr 14 11:39:55 old-k8s-version-943255 kubelet[661]: E0414 11:39:55.053328 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.254482 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:09 old-k8s-version-943255 kubelet[661]: E0414 11:40:09.192116 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.255118 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:10 old-k8s-version-943255 kubelet[661]: E0414 11:40:10.769424 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.255538 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:15 old-k8s-version-943255 kubelet[661]: E0414 11:40:15.053424 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.255724 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:24 old-k8s-version-943255 kubelet[661]: E0414 11:40:24.192449 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.256047 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:27 old-k8s-version-943255 kubelet[661]: E0414 11:40:27.191843 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.258929 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:38 old-k8s-version-943255 kubelet[661]: E0414 11:40:38.205282 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:49.259299 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:39 old-k8s-version-943255 kubelet[661]: E0414 11:40:39.191989 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.259508 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:49 old-k8s-version-943255 kubelet[661]: E0414 11:40:49.192230 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.260116 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:53 old-k8s-version-943255 kubelet[661]: E0414 11:40:53.882336 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.260460 808013 logs.go:138] Found kubelet problem: Apr 14 11:40:55 old-k8s-version-943255 kubelet[661]: E0414 11:40:55.053117 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.260682 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:02 old-k8s-version-943255 kubelet[661]: E0414 11:41:02.195946 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.261040 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:06 old-k8s-version-943255 kubelet[661]: E0414 11:41:06.191978 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.261245 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:13 old-k8s-version-943255 kubelet[661]: E0414 11:41:13.192235 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.261591 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:21 old-k8s-version-943255 kubelet[661]: E0414 11:41:21.191820 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.261806 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:25 old-k8s-version-943255 kubelet[661]: E0414 11:41:25.192141 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.262151 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:34 old-k8s-version-943255 kubelet[661]: E0414 11:41:34.191786 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.262353 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:39 old-k8s-version-943255 kubelet[661]: E0414 11:41:39.192207 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.262702 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:47 old-k8s-version-943255 kubelet[661]: E0414 11:41:47.191837 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.262902 808013 logs.go:138] Found kubelet problem: Apr 14 11:41:52 old-k8s-version-943255 kubelet[661]: E0414 11:41:52.194045 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.263247 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:00 old-k8s-version-943255 kubelet[661]: E0414 11:42:00.196599 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.265879 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:04 old-k8s-version-943255 kubelet[661]: E0414 11:42:04.200603 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0414 11:44:49.266542 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:16 old-k8s-version-943255 kubelet[661]: E0414 11:42:16.072747 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.266749 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:18 old-k8s-version-943255 kubelet[661]: E0414 11:42:18.200708 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.267094 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:25 old-k8s-version-943255 kubelet[661]: E0414 11:42:25.053597 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.267293 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:30 old-k8s-version-943255 kubelet[661]: E0414 11:42:30.192906 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.267657 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:37 old-k8s-version-943255 kubelet[661]: E0414 11:42:37.192466 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.267861 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:41 old-k8s-version-943255 kubelet[661]: E0414 11:42:41.192467 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.268203 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:49 old-k8s-version-943255 kubelet[661]: E0414 11:42:49.191798 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.268402 808013 logs.go:138] Found kubelet problem: Apr 14 11:42:52 old-k8s-version-943255 kubelet[661]: E0414 11:42:52.192457 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.268863 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:01 old-k8s-version-943255 kubelet[661]: E0414 11:43:01.191833 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.269073 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:04 old-k8s-version-943255 kubelet[661]: E0414 11:43:04.193465 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.269419 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:13 old-k8s-version-943255 kubelet[661]: E0414 11:43:13.191790 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.269619 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:19 old-k8s-version-943255 kubelet[661]: E0414 11:43:19.192065 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.269970 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:25 old-k8s-version-943255 kubelet[661]: E0414 11:43:25.191821 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.270175 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:33 old-k8s-version-943255 kubelet[661]: E0414 11:43:33.192850 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.270517 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:39 old-k8s-version-943255 kubelet[661]: E0414 11:43:39.191923 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.270717 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:48 old-k8s-version-943255 kubelet[661]: E0414 11:43:48.192882 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.271077 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:54 old-k8s-version-943255 kubelet[661]: E0414 11:43:54.192005 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.271281 808013 logs.go:138] Found kubelet problem: Apr 14 11:43:59 old-k8s-version-943255 kubelet[661]: E0414 11:43:59.193181 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.271688 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:05 old-k8s-version-943255 kubelet[661]: E0414 11:44:05.192322 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.271908 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:11 old-k8s-version-943255 kubelet[661]: E0414 11:44:11.192365 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.272252 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:16 old-k8s-version-943255 kubelet[661]: E0414 11:44:16.192478 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.272455 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:23 old-k8s-version-943255 kubelet[661]: E0414 11:44:23.192167 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.272804 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:29 old-k8s-version-943255 kubelet[661]: E0414 11:44:29.194021 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.273004 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:37 old-k8s-version-943255 kubelet[661]: E0414 11:44:37.193029 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.273348 808013 logs.go:138] Found kubelet problem: Apr 14 11:44:41 old-k8s-version-943255 kubelet[661]: E0414 11:44:41.192234 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
I0414 11:44:49.273379 808013 logs.go:123] Gathering logs for describe nodes ...
I0414 11:44:49.273408 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0414 11:44:49.445671 808013 logs.go:123] Gathering logs for coredns [0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1] ...
I0414 11:44:49.445743 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1"
I0414 11:44:49.499885 808013 logs.go:123] Gathering logs for kube-proxy [13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1] ...
I0414 11:44:49.499954 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1"
I0414 11:44:49.558047 808013 logs.go:123] Gathering logs for containerd ...
I0414 11:44:49.558124 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0414 11:44:49.621804 808013 logs.go:123] Gathering logs for container status ...
I0414 11:44:49.621880 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0414 11:44:49.685699 808013 logs.go:123] Gathering logs for dmesg ...
I0414 11:44:49.685873 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0414 11:44:49.708488 808013 logs.go:123] Gathering logs for kube-scheduler [1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e] ...
I0414 11:44:49.708560 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e"
I0414 11:44:49.761525 808013 logs.go:123] Gathering logs for kube-scheduler [f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993] ...
I0414 11:44:49.761595 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993"
I0414 11:44:49.816691 808013 logs.go:123] Gathering logs for storage-provisioner [e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa] ...
I0414 11:44:49.816767 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa"
I0414 11:44:49.871098 808013 logs.go:123] Gathering logs for etcd [22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29] ...
I0414 11:44:49.871168 808013 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29"
I0414 11:44:49.957340 808013 out.go:358] Setting ErrFile to fd 2...
I0414 11:44:49.957368 808013 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0414 11:44:49.957436 808013 out.go:270] X Problems detected in kubelet:
W0414 11:44:49.957445 808013 out.go:270] Apr 14 11:44:16 old-k8s-version-943255 kubelet[661]: E0414 11:44:16.192478 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.957453 808013 out.go:270] Apr 14 11:44:23 old-k8s-version-943255 kubelet[661]: E0414 11:44:23.192167 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.957460 808013 out.go:270] Apr 14 11:44:29 old-k8s-version-943255 kubelet[661]: E0414 11:44:29.194021 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
W0414 11:44:49.957467 808013 out.go:270] Apr 14 11:44:37 old-k8s-version-943255 kubelet[661]: E0414 11:44:37.193029 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0414 11:44:49.957542 808013 out.go:270] Apr 14 11:44:41 old-k8s-version-943255 kubelet[661]: E0414 11:44:41.192234 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
I0414 11:44:49.957550 808013 out.go:358] Setting ErrFile to fd 2...
I0414 11:44:49.957557 808013 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0414 11:44:53.109066 818978 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001131978s
I0414 11:44:53.109164 818978 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0414 11:44:59.111324 818978 kubeadm.go:310] [api-check] The API server is healthy after 6.002407495s
I0414 11:44:59.149142 818978 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0414 11:44:59.164948 818978 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0414 11:44:59.190928 818978 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0414 11:44:59.191429 818978 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-680698 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0414 11:44:59.202836 818978 kubeadm.go:310] [bootstrap-token] Using token: kg4s4v.d30eepdqvj0q3gxy
I0414 11:44:59.959458 808013 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0414 11:44:59.970865 808013 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0414 11:44:59.974990 808013 out.go:201]
W0414 11:44:59.978044 808013 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0414 11:44:59.978134 808013 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0414 11:44:59.978193 808013 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0414 11:44:59.978233 808013 out.go:270] *
W0414 11:44:59.979165 808013 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0414 11:44:59.983029 808013 out.go:201]
I0414 11:44:59.205847 818978 out.go:235] - Configuring RBAC rules ...
I0414 11:44:59.205986 818978 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0414 11:44:59.211140 818978 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0414 11:44:59.219705 818978 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0414 11:44:59.223914 818978 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0414 11:44:59.228750 818978 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0414 11:44:59.235203 818978 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0414 11:44:59.519816 818978 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0414 11:44:59.950350 818978 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0414 11:45:00.592822 818978 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0414 11:45:00.592858 818978 kubeadm.go:310]
I0414 11:45:00.592921 818978 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0414 11:45:00.592939 818978 kubeadm.go:310]
I0414 11:45:00.593020 818978 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0414 11:45:00.593031 818978 kubeadm.go:310]
I0414 11:45:00.593059 818978 kubeadm.go:310] mkdir -p $HOME/.kube
I0414 11:45:00.599709 818978 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0414 11:45:00.599793 818978 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0414 11:45:00.599799 818978 kubeadm.go:310]
I0414 11:45:00.599854 818978 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0414 11:45:00.599866 818978 kubeadm.go:310]
I0414 11:45:00.599914 818978 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0414 11:45:00.599923 818978 kubeadm.go:310]
I0414 11:45:00.599975 818978 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0414 11:45:00.600054 818978 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0414 11:45:00.600126 818978 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0414 11:45:00.600139 818978 kubeadm.go:310]
I0414 11:45:00.600223 818978 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0414 11:45:00.600305 818978 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0414 11:45:00.600313 818978 kubeadm.go:310]
I0414 11:45:00.600396 818978 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kg4s4v.d30eepdqvj0q3gxy \
I0414 11:45:00.600502 818978 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:22ecde837dc4537a4ceb7238aee543ce73e2d995effa034a90e0e1ab45e4d9f0 \
I0414 11:45:00.607674 818978 kubeadm.go:310] --control-plane
I0414 11:45:00.607718 818978 kubeadm.go:310]
I0414 11:45:00.607931 818978 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0414 11:45:00.607946 818978 kubeadm.go:310]
I0414 11:45:00.611736 818978 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kg4s4v.d30eepdqvj0q3gxy \
I0414 11:45:00.611861 818978 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:22ecde837dc4537a4ceb7238aee543ce73e2d995effa034a90e0e1ab45e4d9f0
I0414 11:45:00.630731 818978 kubeadm.go:310] [WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
I0414 11:45:00.631076 818978 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1081-aws\n", err: exit status 1
I0414 11:45:00.632442 818978 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0414 11:45:00.632653 818978 cni.go:84] Creating CNI manager for ""
I0414 11:45:00.632665 818978 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0414 11:45:00.649583 818978 out.go:177] * Configuring CNI (Container Networking Interface) ...
==> container status <==
CONTAINER       IMAGE           CREATED         STATE     NAME                        ATTEMPT   POD ID          POD
f3fe1f3572c03   523cad1a4df73   6 seconds ago   Exited    dashboard-metrics-scraper   6         e9a4791ec4452   dashboard-metrics-scraper-8d5bb5db8-jpjgb
e91667bb9e4d3   ba04bb24b9575   5 minutes ago   Running   storage-provisioner         2         423b2106855db   storage-provisioner
59b5daad610a9   20b332c9a70d8   5 minutes ago   Running   kubernetes-dashboard        0         a6fe2dc10abb5   kubernetes-dashboard-cd95d586-xntzk
0dc5268e64369   db91994f4ee8f   5 minutes ago   Running   coredns                     1         ed34217a07d4e   coredns-74ff55c5b-b6hq8
e4d8d84c967c0   1611cd07b61d5   5 minutes ago   Running   busybox                     1         ca6aa6a43dd65   busybox
79c9b42c48cf9   25a5233254979   5 minutes ago   Running   kube-proxy                  1         0dd36abac3925   kube-proxy-rhrdw
daf4175e5b7ab   ba04bb24b9575   5 minutes ago   Exited    storage-provisioner         1         423b2106855db   storage-provisioner
54fee462d7048   ee75e27fff91c   5 minutes ago   Running   kindnet-cni                 1         04c1389f6b73d   kindnet-fv898
2a5a538079e46   1df8a2b116bd1   6 minutes ago   Running   kube-controller-manager     1         1aac355b9eb0b   kube-controller-manager-old-k8s-version-943255
1fccd9ab2f416   e7605f88f17d6   6 minutes ago   Running   kube-scheduler              1         c916ba8394977   kube-scheduler-old-k8s-version-943255
e808a7edc1bef   2c08bbbc02d3a   6 minutes ago   Running   kube-apiserver              1         b91e01e9fcbea   kube-apiserver-old-k8s-version-943255
22d59329be272   05b738aa1bc63   6 minutes ago   Running   etcd                        1         62c352b9c032a   etcd-old-k8s-version-943255
b076f2385bb51   1611cd07b61d5   6 minutes ago   Exited    busybox                     0         9346d4f4a47f4   busybox
33df1ca9d1f5d   db91994f4ee8f   7 minutes ago   Exited    coredns                     0         5d8b4e5a3c858   coredns-74ff55c5b-b6hq8
e3ab91cc88a5c   ee75e27fff91c   8 minutes ago   Exited    kindnet-cni                 0         dbf21b3272b8c   kindnet-fv898
13c73f9eba669   25a5233254979   8 minutes ago   Exited    kube-proxy                  0         3306d22277985   kube-proxy-rhrdw
dfb87161c5c5f   05b738aa1bc63   8 minutes ago   Exited    etcd                        0         af9f7228c6c3f   etcd-old-k8s-version-943255
461bce20618e7   2c08bbbc02d3a   8 minutes ago   Exited    kube-apiserver              0         d2c785f3da3bc   kube-apiserver-old-k8s-version-943255
d980a0c3a521a   1df8a2b116bd1   8 minutes ago   Exited    kube-controller-manager     0         8af0c81f4b002   kube-controller-manager-old-k8s-version-943255
f0cc66fe654db   e7605f88f17d6   8 minutes ago   Exited    kube-scheduler              0         c0925ae2464e5   kube-scheduler-old-k8s-version-943255
==> containerd <==
Apr 14 11:42:04 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:42:04.200149237Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Apr 14 11:42:15 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:42:15.193641511Z" level=info msg="CreateContainer within sandbox \"e9a4791ec445280a9e44a4c23f8e7ce40f03f361e69f03e544b40eb1ee17fe83\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Apr 14 11:42:15 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:42:15.215691186Z" level=info msg="CreateContainer within sandbox \"e9a4791ec445280a9e44a4c23f8e7ce40f03f361e69f03e544b40eb1ee17fe83\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46\""
Apr 14 11:42:15 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:42:15.216399599Z" level=info msg="StartContainer for \"d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46\""
Apr 14 11:42:15 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:42:15.295386415Z" level=info msg="StartContainer for \"d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46\" returns successfully"
Apr 14 11:42:15 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:42:15.295593449Z" level=info msg="received exit event container_id:\"d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46\" id:\"d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46\" pid:3276 exit_status:255 exited_at:{seconds:1744630935 nanos:294108987}"
Apr 14 11:42:15 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:42:15.335379065Z" level=info msg="shim disconnected" id=d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46 namespace=k8s.io
Apr 14 11:42:15 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:42:15.335422314Z" level=warning msg="cleaning up after shim disconnected" id=d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46 namespace=k8s.io
Apr 14 11:42:15 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:42:15.335459541Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 11:42:16 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:42:16.079287494Z" level=info msg="RemoveContainer for \"f10391e31ef584b4e3e330d714dd768ed468d59ffa0c5b044a7e992a63c6608a\""
Apr 14 11:42:16 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:42:16.085418223Z" level=info msg="RemoveContainer for \"f10391e31ef584b4e3e330d714dd768ed468d59ffa0c5b044a7e992a63c6608a\" returns successfully"
Apr 14 11:44:49 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:49.196866524Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 11:44:49 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:49.215848384Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Apr 14 11:44:49 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:49.217997160Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 14 11:44:49 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:49.218021489Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Apr 14 11:44:56 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:56.194550762Z" level=info msg="CreateContainer within sandbox \"e9a4791ec445280a9e44a4c23f8e7ce40f03f361e69f03e544b40eb1ee17fe83\" for container name:\"dashboard-metrics-scraper\" attempt:6"
Apr 14 11:44:56 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:56.226595077Z" level=info msg="CreateContainer within sandbox \"e9a4791ec445280a9e44a4c23f8e7ce40f03f361e69f03e544b40eb1ee17fe83\" for name:\"dashboard-metrics-scraper\" attempt:6 returns container id \"f3fe1f3572c032d372784638fc602d8ef649f2ab7feefa7bf203c9122260a145\""
Apr 14 11:44:56 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:56.227417486Z" level=info msg="StartContainer for \"f3fe1f3572c032d372784638fc602d8ef649f2ab7feefa7bf203c9122260a145\""
Apr 14 11:44:56 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:56.372687920Z" level=info msg="StartContainer for \"f3fe1f3572c032d372784638fc602d8ef649f2ab7feefa7bf203c9122260a145\" returns successfully"
Apr 14 11:44:56 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:56.372872102Z" level=info msg="received exit event container_id:\"f3fe1f3572c032d372784638fc602d8ef649f2ab7feefa7bf203c9122260a145\" id:\"f3fe1f3572c032d372784638fc602d8ef649f2ab7feefa7bf203c9122260a145\" pid:4150 exit_status:255 exited_at:{seconds:1744631096 nanos:362762223}"
Apr 14 11:44:56 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:56.413084940Z" level=info msg="shim disconnected" id=f3fe1f3572c032d372784638fc602d8ef649f2ab7feefa7bf203c9122260a145 namespace=k8s.io
Apr 14 11:44:56 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:56.413137806Z" level=warning msg="cleaning up after shim disconnected" id=f3fe1f3572c032d372784638fc602d8ef649f2ab7feefa7bf203c9122260a145 namespace=k8s.io
Apr 14 11:44:56 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:56.413183476Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 11:44:56 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:56.539324495Z" level=info msg="RemoveContainer for \"d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46\""
Apr 14 11:44:56 old-k8s-version-943255 containerd[566]: time="2025-04-14T11:44:56.545506385Z" level=info msg="RemoveContainer for \"d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46\" returns successfully"
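Two failure patterns stand out in the containerd log above: the image pull to `fake.domain` (apparently a deliberately unresolvable registry in this test) fails DNS lookup, and `dashboard-metrics-scraper` is in a crash loop, visible as a rising `attempt:` counter (5, then 6). A minimal sketch of extracting that counter from such lines; the sample lines are trimmed copies of the entries above, and `restart_attempts` is an illustrative helper, not part of minikube:

```python
import re

# Trimmed sample containerd entries (assumed representative of the log above).
lines = [
    'time="2025-04-14T11:42:15Z" level=info msg="CreateContainer within sandbox \\"e9a4...\\" for container name:\\"dashboard-metrics-scraper\\" attempt:5"',
    'time="2025-04-14T11:44:56Z" level=info msg="CreateContainer within sandbox \\"e9a4...\\" for container name:\\"dashboard-metrics-scraper\\" attempt:6"',
]

def restart_attempts(log_lines):
    """Extract (container, attempt) pairs from CreateContainer entries."""
    pat = re.compile(r'name:\\"([^"\\]+)\\".*?attempt:(\d+)')
    out = []
    for line in log_lines:
        m = pat.search(line)
        if m:
            out.append((m.group(1), int(m.group(2))))
    return out
```

A strictly increasing attempt number for the same container name is the containerd-side view of what Kubernetes reports as CrashLoopBackOff.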
==> coredns [0dc5268e643699428766a750f28dd24413e9c1379a724c6a12733bf9bf65c8e1] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:50022 - 40847 "HINFO IN 6490487655711766354.1282092218908823369. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013696835s
==> coredns [33df1ca9d1f5de23c3ebb4110e2f15f04455f6002c42d308dd69b327f0b59507] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:60157 - 9502 "HINFO IN 5508542050163006861.897983388254840484. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014414709s
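Both coredns instances log one query each: a random-name `HINFO` self-query (what appears to be the loop plugin's startup probe), answered `NXDOMAIN`, which indicates no forwarding loop. The query-log line packs client, question, and response into fixed positions; a small sketch splitting it into labeled fields (`parse_coredns_query` is an illustrative helper under that format assumption):

```python
def parse_coredns_query(line):
    """Split a CoreDNS query log entry into its labeled fields."""
    head, _, tail = line.partition('"')
    quoted, _, rest = tail.partition('"')
    # Quoted part: qtype, qclass, name, proto, request size, DO bit, UDP buffer size.
    qtype, qclass, name, proto, size, do_bit, bufsize = quoted.split()
    # Trailing part: rcode, response flags, response size, duration.
    rcode, flags, rsize, duration = rest.split()
    return {
        "client": head.split()[1],  # ip:port of the querier
        "qtype": qtype, "name": name, "proto": proto,
        "rcode": rcode, "duration_s": float(duration.rstrip("s")),
    }
```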
==> describe nodes <==
Name: old-k8s-version-943255
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-943255
kubernetes.io/os=linux
minikube.k8s.io/commit=43cb59e6a4e9845c84b0379fb52045b7420d26a4
minikube.k8s.io/name=old-k8s-version-943255
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_04_14T11_36_30_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 14 Apr 2025 11:36:26 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-943255
AcquireTime: <unset>
RenewTime: Mon, 14 Apr 2025 11:44:52 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 14 Apr 2025 11:40:00 +0000 Mon, 14 Apr 2025 11:36:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 14 Apr 2025 11:40:00 +0000 Mon, 14 Apr 2025 11:36:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 14 Apr 2025 11:40:00 +0000 Mon, 14 Apr 2025 11:36:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 14 Apr 2025 11:40:00 +0000 Mon, 14 Apr 2025 11:36:45 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-943255
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: fa6e97d519954b899d35715e9fe04be2
System UUID: d681882a-b787-4229-93af-fcb62eb82611
Boot ID: 59456904-b420-460f-bf6e-42b382cceb7f
Kernel Version: 5.15.0-1081-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.27
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m42s
kube-system coredns-74ff55c5b-b6hq8 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 8m17s
kube-system etcd-old-k8s-version-943255 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m24s
kube-system kindnet-fv898 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 8m17s
kube-system kube-apiserver-old-k8s-version-943255 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m24s
kube-system kube-controller-manager-old-k8s-version-943255 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m24s
kube-system kube-proxy-rhrdw 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m17s
kube-system kube-scheduler-old-k8s-version-943255 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m24s
kube-system metrics-server-9975d5f86-7rqxd 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m31s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m16s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-jpjgb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m35s
kubernetes-dashboard kubernetes-dashboard-cd95d586-xntzk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m35s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m43s (x5 over 8m44s) kubelet Node old-k8s-version-943255 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m43s (x4 over 8m44s) kubelet Node old-k8s-version-943255 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m43s (x4 over 8m44s) kubelet Node old-k8s-version-943255 status is now: NodeHasSufficientPID
Normal Starting 8m25s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m25s kubelet Node old-k8s-version-943255 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m25s kubelet Node old-k8s-version-943255 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m25s kubelet Node old-k8s-version-943255 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m24s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m17s kubelet Node old-k8s-version-943255 status is now: NodeReady
Normal Starting 8m16s kube-proxy Starting kube-proxy.
Normal Starting 6m3s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m2s (x8 over 6m2s) kubelet Node old-k8s-version-943255 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m2s (x8 over 6m2s) kubelet Node old-k8s-version-943255 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m2s (x7 over 6m2s) kubelet Node old-k8s-version-943255 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m2s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m50s kube-proxy Starting kube-proxy.
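The `Allocated resources` block above expresses the summed pod requests as a truncated percentage of node allocatable capacity: the seven non-zero CPU requests (100+100+100+250+200+100+100 millicores) total 950m against 2 CPUs, printed as 47%. A minimal recomputation of that arithmetic (`pct` is an illustrative helper, with both quantities in the same unit):

```python
def pct(request, allocatable):
    """Integer percentage as kubectl prints it (truncated, not rounded)."""
    return request * 100 // allocatable

# CPU: millicores requested vs. 2 CPUs of allocatable capacity.
cpu_requests_m = 100 + 100 + 100 + 250 + 200 + 100 + 100   # = 950m from the pod table above

# Memory: 70Mi + 100Mi + 50Mi + 200Mi = 420Mi requested, vs. 8022300Ki allocatable.
mem_requests_ki = 420 * 1024
```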
==> dmesg <==
[Apr14 10:26] overlayfs: '/var/lib/containers/storage/overlay/l/N5ED6J4JNM6I34BFXJDCBYNTX3' not a directory
[ +0.017885] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
[ +0.033761] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
[ +0.764468] overlayfs: '/var/lib/containers/storage/overlay/l/N5ED6J4JNM6I34BFXJDCBYNTX3' not a directory
==> etcd [22d59329be2729230f45f19e03b62c9b7b86d70082fd6293ab1864c24801ae29] <==
2025-04-14 11:40:53.988681 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:41:03.988567 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:41:13.988615 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:41:23.988641 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:41:33.988530 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:41:43.988685 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:41:53.988541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:42:03.988604 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:42:13.988478 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:42:23.988524 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:42:33.989146 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:42:43.988618 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:42:53.988565 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:43:03.988715 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:43:13.988561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:43:23.988608 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:43:33.989081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:43:43.988582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:43:53.988560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:44:03.988662 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:44:13.988573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:44:23.988530 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:44:33.988654 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:44:43.988652 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:44:53.988800 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [dfb87161c5c5fc49006ebdce3f7601957819ec445b865541e146eca0679b5a2c] <==
2025-04-14 11:36:20.019170 I | embed: listening for peers on 192.168.76.2:2380
raft2025/04/14 11:36:20 INFO: ea7e25599daad906 is starting a new election at term 1
raft2025/04/14 11:36:20 INFO: ea7e25599daad906 became candidate at term 2
raft2025/04/14 11:36:20 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2025/04/14 11:36:20 INFO: ea7e25599daad906 became leader at term 2
raft2025/04/14 11:36:20 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-04-14 11:36:20.154594 I | etcdserver: published {Name:old-k8s-version-943255 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-04-14 11:36:20.154886 I | embed: ready to serve client requests
2025-04-14 11:36:20.156353 I | embed: serving client requests on 192.168.76.2:2379
2025-04-14 11:36:20.157056 I | etcdserver: setting up the initial cluster version to 3.4
2025-04-14 11:36:20.167495 I | embed: ready to serve client requests
2025-04-14 11:36:20.168935 N | etcdserver/membership: set the initial cluster version to 3.4
2025-04-14 11:36:20.169176 I | embed: serving client requests on 127.0.0.1:2379
2025-04-14 11:36:20.177865 I | etcdserver/api: enabled capabilities for version 3.4
2025-04-14 11:36:47.843090 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:36:51.656129 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:37:01.655973 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:37:11.655933 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:37:21.656102 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:37:31.655945 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:37:41.655984 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:37:51.655926 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:38:01.656273 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:38:11.656141 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-14 11:38:21.656079 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
11:45:02 up 3:27, 0 users, load average: 3.59, 2.17, 2.45
Linux old-k8s-version-943255 5.15.0-1081-aws #88~20.04.1-Ubuntu SMP Fri Mar 28 14:48:25 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [54fee462d7048537e2d6e5b1ff04e1a355a5529a85df9db91a1a0c4fc0c0135d] <==
I0414 11:43:02.359897 1 main.go:301] handling current node
I0414 11:43:12.350562 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:43:12.350600 1 main.go:301] handling current node
I0414 11:43:22.355304 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:43:22.355341 1 main.go:301] handling current node
I0414 11:43:32.357885 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:43:32.358053 1 main.go:301] handling current node
I0414 11:43:42.359523 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:43:42.359560 1 main.go:301] handling current node
I0414 11:43:52.357046 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:43:52.357085 1 main.go:301] handling current node
I0414 11:44:02.358894 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:44:02.359038 1 main.go:301] handling current node
I0414 11:44:12.350552 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:44:12.350586 1 main.go:301] handling current node
I0414 11:44:22.357083 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:44:22.357117 1 main.go:301] handling current node
I0414 11:44:32.359936 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:44:32.360149 1 main.go:301] handling current node
I0414 11:44:42.357998 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:44:42.358032 1 main.go:301] handling current node
I0414 11:44:52.356208 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:44:52.356250 1 main.go:301] handling current node
I0414 11:45:02.354028 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:45:02.354071 1 main.go:301] handling current node
==> kindnet [e3ab91cc88a5cea8def919348a0aaefa8482399763cb8271c570074b36d6a265] <==
I0414 11:36:48.634879 1 controller.go:365] Waiting for informer caches to sync
I0414 11:36:48.634886 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I0414 11:36:48.935102 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0414 11:36:48.935275 1 metrics.go:61] Registering metrics
I0414 11:36:48.935411 1 controller.go:401] Syncing nftables rules
I0414 11:36:58.641851 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:36:58.641893 1 main.go:301] handling current node
I0414 11:37:08.634293 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:37:08.634391 1 main.go:301] handling current node
I0414 11:37:18.637643 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:37:18.637750 1 main.go:301] handling current node
I0414 11:37:28.642497 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:37:28.642532 1 main.go:301] handling current node
I0414 11:37:38.643007 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:37:38.643040 1 main.go:301] handling current node
I0414 11:37:48.634851 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:37:48.634885 1 main.go:301] handling current node
I0414 11:37:58.634918 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:37:58.634955 1 main.go:301] handling current node
I0414 11:38:08.636036 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:38:08.636933 1 main.go:301] handling current node
I0414 11:38:18.635739 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:38:18.635769 1 main.go:301] handling current node
I0414 11:38:28.637890 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0414 11:38:28.637980 1 main.go:301] handling current node
==> kube-apiserver [461bce20618e7e65ba72755643928d40207b8ccf6203f0954e747fd82c980d42] <==
I0414 11:36:27.283463 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0414 11:36:27.283684 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0414 11:36:27.306303 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0414 11:36:27.311193 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0414 11:36:27.311218 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0414 11:36:27.765183 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0414 11:36:27.805121 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0414 11:36:27.949212 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0414 11:36:27.950226 1 controller.go:606] quota admission added evaluator for: endpoints
I0414 11:36:27.953509 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0414 11:36:28.247115 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0414 11:36:28.917625 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0414 11:36:29.378688 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0414 11:36:29.423142 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0414 11:36:45.155966 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0414 11:36:45.173948 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0414 11:37:00.692857 1 client.go:360] parsed scheme: "passthrough"
I0414 11:37:00.692897 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 11:37:00.692906 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0414 11:37:33.387948 1 client.go:360] parsed scheme: "passthrough"
I0414 11:37:33.387990 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 11:37:33.388023 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0414 11:38:06.204038 1 client.go:360] parsed scheme: "passthrough"
I0414 11:38:06.204347 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 11:38:06.204441 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [e808a7edc1bef543a4087d91ae5c2ee0e55080a7ebbb9d2b1aca1f9ef59584a8] <==
I0414 11:41:34.287807 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 11:41:34.287815 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0414 11:42:08.796006 1 client.go:360] parsed scheme: "passthrough"
I0414 11:42:08.796275 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 11:42:08.796397 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0414 11:42:12.289018 1 handler_proxy.go:102] no RequestInfo found in the context
E0414 11:42:12.289240 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0414 11:42:12.289259 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0414 11:42:45.869146 1 client.go:360] parsed scheme: "passthrough"
I0414 11:42:45.869187 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 11:42:45.869196 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0414 11:43:26.374758 1 client.go:360] parsed scheme: "passthrough"
I0414 11:43:26.374822 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 11:43:26.374855 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0414 11:44:08.793534 1 client.go:360] parsed scheme: "passthrough"
I0414 11:44:08.793581 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 11:44:08.793589 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0414 11:44:10.809696 1 handler_proxy.go:102] no RequestInfo found in the context
E0414 11:44:10.809963 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0414 11:44:10.809982 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0414 11:44:44.711062 1 client.go:360] parsed scheme: "passthrough"
I0414 11:44:44.711105 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0414 11:44:44.711114 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [2a5a538079e469c67b6e5ff15238d2e56b50b050adec15288668075ef4d1f8e6] <==
W0414 11:40:35.602638 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 11:40:59.295341 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 11:41:07.253099 1 request.go:655] Throttling request took 1.048474623s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W0414 11:41:08.104464 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 11:41:29.800429 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 11:41:39.754800 1 request.go:655] Throttling request took 1.048395991s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0414 11:41:40.606603 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 11:42:00.306507 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 11:42:12.256984 1 request.go:655] Throttling request took 1.048481579s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
W0414 11:42:13.108452 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 11:42:30.808188 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 11:42:44.758875 1 request.go:655] Throttling request took 1.048291066s, request: GET:https://192.168.76.2:8443/apis/autoscaling/v1?timeout=32s
W0414 11:42:45.610319 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 11:43:01.310043 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 11:43:17.213159 1 request.go:655] Throttling request took 1.000881234s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
W0414 11:43:18.113474 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 11:43:31.811946 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 11:43:49.763928 1 request.go:655] Throttling request took 1.04839686s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0414 11:43:50.615288 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 11:44:02.314160 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 11:44:22.265871 1 request.go:655] Throttling request took 1.04824586s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0414 11:44:23.117351 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0414 11:44:32.829581 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0414 11:44:54.767740 1 request.go:655] Throttling request took 1.04810435s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0414 11:44:55.619590 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
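The restarted controller-manager's repeating warnings and errors above all name a single culprit: discovery of `metrics.k8s.io/v1beta1` fails because the metrics-server pod never comes up (its image pull fails, per the containerd log). A sketch tallying which group the two log shapes blame; the sample lines are trimmed copies of entries above, and `failing_groups` is an illustrative helper:

```python
import re
from collections import Counter

# Trimmed samples of the two recurring log shapes above.
sample = [
    'W0414 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]',
    'E0414 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request',
]

def failing_groups(lines):
    """Count API groups named in discovery failures (both log shapes above)."""
    pat = re.compile(r'map\[([^:\]]+):|server APIs: ([^:]+):')
    c = Counter()
    for line in lines:
        m = pat.search(line)
        if m:
            c[m.group(1) or m.group(2)] += 1
    return c
```

A single dominating group in such a tally points at one broken aggregated APIService rather than a cluster-wide control-plane problem.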
==> kube-controller-manager [d980a0c3a521acf80d9d000b62e5487a6e2a9cca9211cf9ce1cf98291bd483a6] <==
I0414 11:36:45.191909 1 shared_informer.go:247] Caches are synced for disruption
I0414 11:36:45.191943 1 disruption.go:339] Sending events to api server.
I0414 11:36:45.192033 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I0414 11:36:45.203698 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rhrdw"
I0414 11:36:45.231005 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-b6hq8"
I0414 11:36:45.236225 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fv898"
I0414 11:36:45.245842 1 shared_informer.go:247] Caches are synced for endpoint
I0414 11:36:45.246099 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0414 11:36:45.269485 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-cx4np"
I0414 11:36:45.294841 1 shared_informer.go:247] Caches are synced for attach detach
I0414 11:36:45.393930 1 shared_informer.go:247] Caches are synced for resource quota
I0414 11:36:45.480597 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0414 11:36:45.740936 1 shared_informer.go:247] Caches are synced for garbage collector
I0414 11:36:45.740966 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0414 11:36:45.780826 1 shared_informer.go:247] Caches are synced for garbage collector
I0414 11:36:46.140839 1 request.go:655] Throttling request took 1.041100394s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
I0414 11:36:46.721534 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0414 11:36:46.773073 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-cx4np"
I0414 11:36:46.942717 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0414 11:36:46.942751 1 shared_informer.go:247] Caches are synced for resource quota
I0414 11:36:50.122559 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0414 11:38:30.306637 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E0414 11:38:30.580031 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0414 11:38:30.581252 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0414 11:38:31.413509 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-7rqxd"
==> kube-proxy [13c73f9eba6698d2af640373e7b8e39d52d664ecacdd0456aed1e6a9d8b225d1] <==
I0414 11:36:46.181660 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0414 11:36:46.181821 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0414 11:36:46.245624 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0414 11:36:46.245823 1 server_others.go:185] Using iptables Proxier.
I0414 11:36:46.246123 1 server.go:650] Version: v1.20.0
I0414 11:36:46.250510 1 config.go:315] Starting service config controller
I0414 11:36:46.257685 1 shared_informer.go:240] Waiting for caches to sync for service config
I0414 11:36:46.257702 1 shared_informer.go:247] Caches are synced for service config
I0414 11:36:46.255038 1 config.go:224] Starting endpoint slice config controller
I0414 11:36:46.258025 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0414 11:36:46.358163 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [79c9b42c48cf90141ff668b53f7e9b07e00d01eb9ccb093d04eb9a7bb095e803] <==
I0414 11:39:12.290033 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0414 11:39:12.290298 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0414 11:39:12.323112 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0414 11:39:12.323233 1 server_others.go:185] Using iptables Proxier.
I0414 11:39:12.323480 1 server.go:650] Version: v1.20.0
I0414 11:39:12.323905 1 config.go:315] Starting service config controller
I0414 11:39:12.323913 1 shared_informer.go:240] Waiting for caches to sync for service config
I0414 11:39:12.323927 1 config.go:224] Starting endpoint slice config controller
I0414 11:39:12.323931 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0414 11:39:12.424152 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0414 11:39:12.429863 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [1fccd9ab2f416209ffcb46fc280b9a5e065078b5eec940eccdfb5e454e965d6e] <==
I0414 11:39:04.649206 1 serving.go:331] Generated self-signed cert in-memory
W0414 11:39:09.682695 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0414 11:39:09.682733 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0414 11:39:09.682743 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0414 11:39:09.682749 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0414 11:39:09.993531 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0414 11:39:09.993688 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0414 11:39:09.993699 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0414 11:39:09.993711 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0414 11:39:10.097985 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [f0cc66fe654dbe9a48354054e5e18a5e24e2abf9fb2fe2d59d6468869ca5d993] <==
I0414 11:36:22.680904 1 serving.go:331] Generated self-signed cert in-memory
W0414 11:36:26.405731 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0414 11:36:26.405839 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0414 11:36:26.405876 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0414 11:36:26.405903 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0414 11:36:26.505864 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0414 11:36:26.505966 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0414 11:36:26.506227 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0414 11:36:26.506303 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0414 11:36:26.518337 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0414 11:36:26.518557 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0414 11:36:26.530361 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0414 11:36:26.537032 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0414 11:36:26.542995 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0414 11:36:26.543831 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0414 11:36:26.545058 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0414 11:36:26.545243 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0414 11:36:26.545314 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0414 11:36:26.545365 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0414 11:36:26.545472 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0414 11:36:26.543157 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0414 11:36:27.906115 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Apr 14 11:43:39 old-k8s-version-943255 kubelet[661]: I0414 11:43:39.191479 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46
Apr 14 11:43:39 old-k8s-version-943255 kubelet[661]: E0414 11:43:39.191923 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
Apr 14 11:43:48 old-k8s-version-943255 kubelet[661]: E0414 11:43:48.192882 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 11:43:54 old-k8s-version-943255 kubelet[661]: I0414 11:43:54.191621 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46
Apr 14 11:43:54 old-k8s-version-943255 kubelet[661]: E0414 11:43:54.192005 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
Apr 14 11:43:59 old-k8s-version-943255 kubelet[661]: E0414 11:43:59.193181 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 11:44:05 old-k8s-version-943255 kubelet[661]: I0414 11:44:05.191536 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46
Apr 14 11:44:05 old-k8s-version-943255 kubelet[661]: E0414 11:44:05.192322 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
Apr 14 11:44:11 old-k8s-version-943255 kubelet[661]: E0414 11:44:11.192365 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 11:44:16 old-k8s-version-943255 kubelet[661]: I0414 11:44:16.191631 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46
Apr 14 11:44:16 old-k8s-version-943255 kubelet[661]: E0414 11:44:16.192478 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
Apr 14 11:44:23 old-k8s-version-943255 kubelet[661]: E0414 11:44:23.192167 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 11:44:29 old-k8s-version-943255 kubelet[661]: I0414 11:44:29.193211 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46
Apr 14 11:44:29 old-k8s-version-943255 kubelet[661]: E0414 11:44:29.194021 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
Apr 14 11:44:37 old-k8s-version-943255 kubelet[661]: E0414 11:44:37.193029 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 14 11:44:41 old-k8s-version-943255 kubelet[661]: I0414 11:44:41.191471 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46
Apr 14 11:44:41 old-k8s-version-943255 kubelet[661]: E0414 11:44:41.192234 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
Apr 14 11:44:49 old-k8s-version-943255 kubelet[661]: E0414 11:44:49.218471 661 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 14 11:44:49 old-k8s-version-943255 kubelet[661]: E0414 11:44:49.218908 661 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 14 11:44:49 old-k8s-version-943255 kubelet[661]: E0414 11:44:49.219478 661 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-9hmzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 14 11:44:49 old-k8s-version-943255 kubelet[661]: E0414 11:44:49.219674 661 pod_workers.go:191] Error syncing pod 71fc0923-7b65-4b09-a48f-0a3c71066699 ("metrics-server-9975d5f86-7rqxd_kube-system(71fc0923-7b65-4b09-a48f-0a3c71066699)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 14 11:44:56 old-k8s-version-943255 kubelet[661]: I0414 11:44:56.191479 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46
Apr 14 11:44:56 old-k8s-version-943255 kubelet[661]: I0414 11:44:56.535868 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: d6d49024b726da144f3869b130023bed07ed052c7c182852a0ee2efa6c793d46
Apr 14 11:44:56 old-k8s-version-943255 kubelet[661]: I0414 11:44:56.536234 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: f3fe1f3572c032d372784638fc602d8ef649f2ab7feefa7bf203c9122260a145
Apr 14 11:44:56 old-k8s-version-943255 kubelet[661]: E0414 11:44:56.536520 661 pod_workers.go:191] Error syncing pod 8099402f-99d6-49b6-8298-f919ba25dd03 ("dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-jpjgb_kubernetes-dashboard(8099402f-99d6-49b6-8298-f919ba25dd03)"
==> kubernetes-dashboard [59b5daad610a953bdfa365e57443632131e315380f3932f1e70fc6821886a566] <==
2025/04/14 11:39:35 Using namespace: kubernetes-dashboard
2025/04/14 11:39:35 Using in-cluster config to connect to apiserver
2025/04/14 11:39:35 Using secret token for csrf signing
2025/04/14 11:39:35 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/04/14 11:39:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/04/14 11:39:35 Successful initial request to the apiserver, version: v1.20.0
2025/04/14 11:39:35 Generating JWE encryption key
2025/04/14 11:39:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/04/14 11:39:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/04/14 11:39:36 Initializing JWE encryption key from synchronized object
2025/04/14 11:39:36 Creating in-cluster Sidecar client
2025/04/14 11:39:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 11:39:36 Serving insecurely on HTTP port: 9090
2025/04/14 11:40:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 11:40:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 11:41:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 11:41:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 11:42:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 11:42:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 11:43:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 11:43:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 11:44:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 11:44:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/14 11:39:35 Starting overwatch
==> storage-provisioner [daf4175e5b7abdd9b6fc24d967616abee8e58d39811a21b110b4ed1c20dcdbd9] <==
I0414 11:39:11.860848 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0414 11:39:41.863147 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
==> storage-provisioner [e91667bb9e4d3a4a67ee3b7d7f830b9bbced4428dc545e292333a361a41354aa] <==
I0414 11:39:56.385970 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0414 11:39:56.429994 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0414 11:39:56.430256 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0414 11:40:13.911900 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0414 11:40:13.912429 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d16a269a-4a17-4241-b129-fe6922ff43b1", APIVersion:"v1", ResourceVersion:"844", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-943255_b4263a48-4a5c-4021-a34a-33f437092d5f became leader
I0414 11:40:13.912592 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-943255_b4263a48-4a5c-4021-a34a-33f437092d5f!
I0414 11:40:14.013440 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-943255_b4263a48-4a5c-4021-a34a-33f437092d5f!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-943255 -n old-k8s-version-943255
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-943255 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-7rqxd
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-943255 describe pod metrics-server-9975d5f86-7rqxd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-943255 describe pod metrics-server-9975d5f86-7rqxd: exit status 1 (110.553059ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-7rqxd" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-943255 describe pod metrics-server-9975d5f86-7rqxd: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.89s)