=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-789808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-789808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m19.886808328s)
-- stdout --
* [old-k8s-version-789808] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20604
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20604-581234/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-581234/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-789808" primary control-plane node in "old-k8s-version-789808" cluster
* Pulling base image v0.0.46-1744107393-20604 ...
* Restarting existing docker container for "old-k8s-version-789808" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-789808 addons enable metrics-server
* Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
-- /stdout --
** stderr **
I0408 19:28:42.503090 800094 out.go:345] Setting OutFile to fd 1 ...
I0408 19:28:42.503340 800094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 19:28:42.503372 800094 out.go:358] Setting ErrFile to fd 2...
I0408 19:28:42.503396 800094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 19:28:42.503778 800094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-581234/.minikube/bin
I0408 19:28:42.504340 800094 out.go:352] Setting JSON to false
I0408 19:28:42.505435 800094 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11474,"bootTime":1744129049,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I0408 19:28:42.505533 800094 start.go:139] virtualization:
I0408 19:28:42.509156 800094 out.go:177] * [old-k8s-version-789808] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0408 19:28:42.513230 800094 notify.go:220] Checking for updates...
I0408 19:28:42.517183 800094 out.go:177] - MINIKUBE_LOCATION=20604
I0408 19:28:42.520729 800094 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0408 19:28:42.523675 800094 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20604-581234/kubeconfig
I0408 19:28:42.526631 800094 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-581234/.minikube
I0408 19:28:42.529546 800094 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0408 19:28:42.532504 800094 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0408 19:28:42.535824 800094 config.go:182] Loaded profile config "old-k8s-version-789808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0408 19:28:42.539508 800094 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
I0408 19:28:42.542617 800094 driver.go:394] Setting default libvirt URI to qemu:///system
I0408 19:28:42.584228 800094 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0408 19:28:42.584343 800094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0408 19:28:42.680187 800094 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-04-08 19:28:42.66969143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0408 19:28:42.680291 800094 docker.go:318] overlay module found
I0408 19:28:42.685320 800094 out.go:177] * Using the docker driver based on existing profile
I0408 19:28:42.688350 800094 start.go:297] selected driver: docker
I0408 19:28:42.688373 800094 start.go:901] validating driver "docker" against &{Name:old-k8s-version-789808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-789808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 19:28:42.688479 800094 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0408 19:28:42.689189 800094 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0408 19:28:42.779252 800094 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-04-08 19:28:42.769750881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0408 19:28:42.779617 800094 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0408 19:28:42.779657 800094 cni.go:84] Creating CNI manager for ""
I0408 19:28:42.779719 800094 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0408 19:28:42.779764 800094 start.go:340] cluster config:
{Name:old-k8s-version-789808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-789808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 19:28:42.783278 800094 out.go:177] * Starting "old-k8s-version-789808" primary control-plane node in "old-k8s-version-789808" cluster
I0408 19:28:42.786940 800094 cache.go:121] Beginning downloading kic base image for docker with containerd
I0408 19:28:42.789894 800094 out.go:177] * Pulling base image v0.0.46-1744107393-20604 ...
I0408 19:28:42.792755 800094 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0408 19:28:42.792818 800094 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-581234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0408 19:28:42.792831 800094 cache.go:56] Caching tarball of preloaded images
I0408 19:28:42.792851 800094 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon
I0408 19:28:42.792913 800094 preload.go:172] Found /home/jenkins/minikube-integration/20604-581234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0408 19:28:42.792923 800094 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0408 19:28:42.793041 800094 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/old-k8s-version-789808/config.json ...
I0408 19:28:42.813732 800094 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon, skipping pull
I0408 19:28:42.813755 800094 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a exists in daemon, skipping load
I0408 19:28:42.813774 800094 cache.go:230] Successfully downloaded all kic artifacts
I0408 19:28:42.813797 800094 start.go:360] acquireMachinesLock for old-k8s-version-789808: {Name:mkb10d46acc3b5a9b97ffa8fce768be8cc5bbb18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0408 19:28:42.813855 800094 start.go:364] duration metric: took 34.421µs to acquireMachinesLock for "old-k8s-version-789808"
I0408 19:28:42.813880 800094 start.go:96] Skipping create...Using existing machine configuration
I0408 19:28:42.813886 800094 fix.go:54] fixHost starting:
I0408 19:28:42.814138 800094 cli_runner.go:164] Run: docker container inspect old-k8s-version-789808 --format={{.State.Status}}
I0408 19:28:42.834320 800094 fix.go:112] recreateIfNeeded on old-k8s-version-789808: state=Stopped err=<nil>
W0408 19:28:42.834348 800094 fix.go:138] unexpected machine state, will restart: <nil>
I0408 19:28:42.838244 800094 out.go:177] * Restarting existing docker container for "old-k8s-version-789808" ...
I0408 19:28:42.841133 800094 cli_runner.go:164] Run: docker start old-k8s-version-789808
I0408 19:28:43.149568 800094 cli_runner.go:164] Run: docker container inspect old-k8s-version-789808 --format={{.State.Status}}
I0408 19:28:43.182453 800094 kic.go:430] container "old-k8s-version-789808" state is running.
I0408 19:28:43.185795 800094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-789808
I0408 19:28:43.208745 800094 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/old-k8s-version-789808/config.json ...
I0408 19:28:43.209013 800094 machine.go:93] provisionDockerMachine start ...
I0408 19:28:43.209091 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:43.240414 800094 main.go:141] libmachine: Using SSH client type: native
I0408 19:28:43.242462 800094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33799 <nil> <nil>}
I0408 19:28:43.242647 800094 main.go:141] libmachine: About to run SSH command:
hostname
I0408 19:28:43.244920 800094 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0408 19:28:46.382428 800094 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-789808
I0408 19:28:46.382551 800094 ubuntu.go:169] provisioning hostname "old-k8s-version-789808"
I0408 19:28:46.382655 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:46.415246 800094 main.go:141] libmachine: Using SSH client type: native
I0408 19:28:46.415549 800094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33799 <nil> <nil>}
I0408 19:28:46.415567 800094 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-789808 && echo "old-k8s-version-789808" | sudo tee /etc/hostname
I0408 19:28:46.556835 800094 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-789808
I0408 19:28:46.556916 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:46.577038 800094 main.go:141] libmachine: Using SSH client type: native
I0408 19:28:46.577331 800094 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33799 <nil> <nil>}
I0408 19:28:46.577348 800094 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-789808' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-789808/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-789808' | sudo tee -a /etc/hosts;
fi
fi
I0408 19:28:46.702717 800094 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0408 19:28:46.702745 800094 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20604-581234/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-581234/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-581234/.minikube}
I0408 19:28:46.702764 800094 ubuntu.go:177] setting up certificates
I0408 19:28:46.702775 800094 provision.go:84] configureAuth start
I0408 19:28:46.702834 800094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-789808
I0408 19:28:46.720393 800094 provision.go:143] copyHostCerts
I0408 19:28:46.720454 800094 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-581234/.minikube/ca.pem, removing ...
I0408 19:28:46.720471 800094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-581234/.minikube/ca.pem
I0408 19:28:46.720544 800094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-581234/.minikube/ca.pem (1078 bytes)
I0408 19:28:46.720644 800094 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-581234/.minikube/cert.pem, removing ...
I0408 19:28:46.720649 800094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-581234/.minikube/cert.pem
I0408 19:28:46.720675 800094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-581234/.minikube/cert.pem (1123 bytes)
I0408 19:28:46.720737 800094 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-581234/.minikube/key.pem, removing ...
I0408 19:28:46.720742 800094 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-581234/.minikube/key.pem
I0408 19:28:46.720765 800094 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-581234/.minikube/key.pem (1679 bytes)
I0408 19:28:46.720828 800094 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-581234/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-789808 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-789808]
I0408 19:28:47.169724 800094 provision.go:177] copyRemoteCerts
I0408 19:28:47.169819 800094 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0408 19:28:47.169868 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:47.189718 800094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/old-k8s-version-789808/id_rsa Username:docker}
I0408 19:28:47.284358 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0408 19:28:47.314261 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0408 19:28:47.345918 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0408 19:28:47.375693 800094 provision.go:87] duration metric: took 672.905197ms to configureAuth
I0408 19:28:47.375766 800094 ubuntu.go:193] setting minikube options for container-runtime
I0408 19:28:47.376005 800094 config.go:182] Loaded profile config "old-k8s-version-789808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0408 19:28:47.376023 800094 machine.go:96] duration metric: took 4.166993611s to provisionDockerMachine
I0408 19:28:47.376033 800094 start.go:293] postStartSetup for "old-k8s-version-789808" (driver="docker")
I0408 19:28:47.376056 800094 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0408 19:28:47.376133 800094 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0408 19:28:47.376223 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:47.396042 800094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/old-k8s-version-789808/id_rsa Username:docker}
I0408 19:28:47.487740 800094 ssh_runner.go:195] Run: cat /etc/os-release
I0408 19:28:47.490990 800094 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0408 19:28:47.491067 800094 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0408 19:28:47.491085 800094 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0408 19:28:47.491093 800094 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0408 19:28:47.491104 800094 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-581234/.minikube/addons for local assets ...
I0408 19:28:47.491161 800094 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-581234/.minikube/files for local assets ...
I0408 19:28:47.491257 800094 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-581234/.minikube/files/etc/ssl/certs/5866092.pem -> 5866092.pem in /etc/ssl/certs
I0408 19:28:47.491363 800094 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0408 19:28:47.501628 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/files/etc/ssl/certs/5866092.pem --> /etc/ssl/certs/5866092.pem (1708 bytes)
I0408 19:28:47.527210 800094 start.go:296] duration metric: took 151.161169ms for postStartSetup
I0408 19:28:47.527335 800094 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0408 19:28:47.527415 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:47.546164 800094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/old-k8s-version-789808/id_rsa Username:docker}
I0408 19:28:47.636246 800094 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0408 19:28:47.640727 800094 fix.go:56] duration metric: took 4.826833488s for fixHost
I0408 19:28:47.640759 800094 start.go:83] releasing machines lock for "old-k8s-version-789808", held for 4.826889857s
I0408 19:28:47.640834 800094 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-789808
I0408 19:28:47.657963 800094 ssh_runner.go:195] Run: cat /version.json
I0408 19:28:47.658013 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:47.658088 800094 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0408 19:28:47.658151 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:47.677767 800094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/old-k8s-version-789808/id_rsa Username:docker}
I0408 19:28:47.686610 800094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/old-k8s-version-789808/id_rsa Username:docker}
I0408 19:28:47.946273 800094 ssh_runner.go:195] Run: systemctl --version
I0408 19:28:47.950686 800094 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0408 19:28:47.954919 800094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0408 19:28:47.973948 800094 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0408 19:28:47.974069 800094 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0408 19:28:47.983344 800094 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0408 19:28:47.983382 800094 start.go:495] detecting cgroup driver to use...
I0408 19:28:47.983449 800094 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0408 19:28:47.983532 800094 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0408 19:28:47.998762 800094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0408 19:28:48.014295 800094 docker.go:217] disabling cri-docker service (if available) ...
I0408 19:28:48.014421 800094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0408 19:28:48.028453 800094 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0408 19:28:48.041704 800094 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0408 19:28:48.141229 800094 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0408 19:28:48.239143 800094 docker.go:233] disabling docker service ...
I0408 19:28:48.239239 800094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0408 19:28:48.260939 800094 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0408 19:28:48.274179 800094 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0408 19:28:48.391381 800094 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0408 19:28:48.490014 800094 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0408 19:28:48.506527 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0408 19:28:48.523698 800094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0408 19:28:48.536052 800094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0408 19:28:48.551017 800094 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0408 19:28:48.551133 800094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0408 19:28:48.569407 800094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0408 19:28:48.580749 800094 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0408 19:28:48.590860 800094 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0408 19:28:48.601947 800094 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0408 19:28:48.611567 800094 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0408 19:28:48.621567 800094 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0408 19:28:48.631468 800094 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0408 19:28:48.640038 800094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0408 19:28:48.772514 800094 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0408 19:28:49.024175 800094 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0408 19:28:49.024295 800094 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0408 19:28:49.029154 800094 start.go:563] Will wait 60s for crictl version
I0408 19:28:49.029285 800094 ssh_runner.go:195] Run: which crictl
I0408 19:28:49.033256 800094 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0408 19:28:49.084023 800094 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0408 19:28:49.084138 800094 ssh_runner.go:195] Run: containerd --version
I0408 19:28:49.117127 800094 ssh_runner.go:195] Run: containerd --version
I0408 19:28:49.145108 800094 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.27 ...
I0408 19:28:49.148057 800094 cli_runner.go:164] Run: docker network inspect old-k8s-version-789808 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0408 19:28:49.165186 800094 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0408 19:28:49.169127 800094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0408 19:28:49.187898 800094 kubeadm.go:883] updating cluster {Name:old-k8s-version-789808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-789808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0408 19:28:49.188024 800094 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0408 19:28:49.188089 800094 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 19:28:49.267158 800094 containerd.go:627] all images are preloaded for containerd runtime.
I0408 19:28:49.267179 800094 containerd.go:534] Images already preloaded, skipping extraction
I0408 19:28:49.267236 800094 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 19:28:49.322665 800094 containerd.go:627] all images are preloaded for containerd runtime.
I0408 19:28:49.322686 800094 cache_images.go:84] Images are preloaded, skipping loading
I0408 19:28:49.322694 800094 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0408 19:28:49.322808 800094 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-789808 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-789808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0408 19:28:49.322872 800094 ssh_runner.go:195] Run: sudo crictl info
I0408 19:28:49.373760 800094 cni.go:84] Creating CNI manager for ""
I0408 19:28:49.373781 800094 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0408 19:28:49.373791 800094 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0408 19:28:49.373810 800094 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-789808 NodeName:old-k8s-version-789808 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0408 19:28:49.373941 800094 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-789808"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0408 19:28:49.374007 800094 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0408 19:28:49.384865 800094 binaries.go:44] Found k8s binaries, skipping transfer
I0408 19:28:49.384988 800094 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0408 19:28:49.396172 800094 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0408 19:28:49.416211 800094 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0408 19:28:49.434698 800094 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0408 19:28:49.461136 800094 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0408 19:28:49.471804 800094 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0408 19:28:49.484230 800094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0408 19:28:49.616203 800094 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0408 19:28:49.634425 800094 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/old-k8s-version-789808 for IP: 192.168.76.2
I0408 19:28:49.634447 800094 certs.go:194] generating shared ca certs ...
I0408 19:28:49.634462 800094 certs.go:226] acquiring lock for ca certs: {Name:mkbcf8d523d57729eb1fc091129687c3aa71d028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:28:49.634749 800094 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-581234/.minikube/ca.key
I0408 19:28:49.634841 800094 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-581234/.minikube/proxy-client-ca.key
I0408 19:28:49.634857 800094 certs.go:256] generating profile certs ...
I0408 19:28:49.634979 800094 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/old-k8s-version-789808/client.key
I0408 19:28:49.635087 800094 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/old-k8s-version-789808/apiserver.key.0faaa341
I0408 19:28:49.635167 800094 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/old-k8s-version-789808/proxy-client.key
I0408 19:28:49.635313 800094 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/586609.pem (1338 bytes)
W0408 19:28:49.635368 800094 certs.go:480] ignoring /home/jenkins/minikube-integration/20604-581234/.minikube/certs/586609_empty.pem, impossibly tiny 0 bytes
I0408 19:28:49.635383 800094 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca-key.pem (1675 bytes)
I0408 19:28:49.635408 800094 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca.pem (1078 bytes)
I0408 19:28:49.635458 800094 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/cert.pem (1123 bytes)
I0408 19:28:49.635494 800094 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/key.pem (1679 bytes)
I0408 19:28:49.635604 800094 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/files/etc/ssl/certs/5866092.pem (1708 bytes)
I0408 19:28:49.636476 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0408 19:28:49.675777 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0408 19:28:49.739875 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0408 19:28:49.810982 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0408 19:28:49.878665 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/old-k8s-version-789808/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0408 19:28:49.932997 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/old-k8s-version-789808/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0408 19:28:49.988314 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/old-k8s-version-789808/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0408 19:28:50.052404 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/old-k8s-version-789808/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0408 19:28:50.137858 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/files/etc/ssl/certs/5866092.pem --> /usr/share/ca-certificates/5866092.pem (1708 bytes)
I0408 19:28:50.178596 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0408 19:28:50.230942 800094 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/certs/586609.pem --> /usr/share/ca-certificates/586609.pem (1338 bytes)
I0408 19:28:50.266814 800094 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0408 19:28:50.289736 800094 ssh_runner.go:195] Run: openssl version
I0408 19:28:50.298080 800094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0408 19:28:50.310855 800094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0408 19:28:50.316026 800094 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 8 18:41 /usr/share/ca-certificates/minikubeCA.pem
I0408 19:28:50.316102 800094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0408 19:28:50.326061 800094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0408 19:28:50.335890 800094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586609.pem && ln -fs /usr/share/ca-certificates/586609.pem /etc/ssl/certs/586609.pem"
I0408 19:28:50.346416 800094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586609.pem
I0408 19:28:50.353220 800094 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 8 18:49 /usr/share/ca-certificates/586609.pem
I0408 19:28:50.353336 800094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586609.pem
I0408 19:28:50.362270 800094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586609.pem /etc/ssl/certs/51391683.0"
I0408 19:28:50.375551 800094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5866092.pem && ln -fs /usr/share/ca-certificates/5866092.pem /etc/ssl/certs/5866092.pem"
I0408 19:28:50.389353 800094 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5866092.pem
I0408 19:28:50.393179 800094 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 8 18:49 /usr/share/ca-certificates/5866092.pem
I0408 19:28:50.393312 800094 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5866092.pem
I0408 19:28:50.401567 800094 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5866092.pem /etc/ssl/certs/3ec20f2e.0"
I0408 19:28:50.415953 800094 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0408 19:28:50.420451 800094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0408 19:28:50.430967 800094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0408 19:28:50.439394 800094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0408 19:28:50.451005 800094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0408 19:28:50.458450 800094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0408 19:28:50.468622 800094 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0408 19:28:50.479117 800094 kubeadm.go:392] StartCluster: {Name:old-k8s-version-789808 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-789808 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 19:28:50.479229 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0408 19:28:50.479351 800094 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0408 19:28:50.555802 800094 cri.go:89] found id: "ae1fdec722609fbe82a250e7c19dd6ed16f8b414530dde41d3a421888a1ebf65"
I0408 19:28:50.555846 800094 cri.go:89] found id: "a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d"
I0408 19:28:50.555867 800094 cri.go:89] found id: "f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c"
I0408 19:28:50.555879 800094 cri.go:89] found id: "07479cb40f85c22e98bc4ec1b04526cc4e11a5a42a8a7f29a0495a15e1da05fb"
I0408 19:28:50.555883 800094 cri.go:89] found id: "866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e"
I0408 19:28:50.555887 800094 cri.go:89] found id: "38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6"
I0408 19:28:50.555890 800094 cri.go:89] found id: "c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b"
I0408 19:28:50.555894 800094 cri.go:89] found id: "8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca"
I0408 19:28:50.555912 800094 cri.go:89] found id: "e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0"
I0408 19:28:50.555926 800094 cri.go:89] found id: ""
I0408 19:28:50.556006 800094 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0408 19:28:50.576640 800094 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-04-08T19:28:50Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0408 19:28:50.576806 800094 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0408 19:28:50.587202 800094 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0408 19:28:50.587242 800094 kubeadm.go:593] restartPrimaryControlPlane start ...
I0408 19:28:50.587363 800094 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0408 19:28:50.600074 800094 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0408 19:28:50.600718 800094 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-789808" does not appear in /home/jenkins/minikube-integration/20604-581234/kubeconfig
I0408 19:28:50.601030 800094 kubeconfig.go:62] /home/jenkins/minikube-integration/20604-581234/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-789808" cluster setting kubeconfig missing "old-k8s-version-789808" context setting]
I0408 19:28:50.601570 800094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-581234/kubeconfig: {Name:mkdc378fe21dd04e3a9dd54b97606915163de072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:28:50.603311 800094 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0408 19:28:50.619876 800094 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0408 19:28:50.619910 800094 kubeadm.go:597] duration metric: took 32.661154ms to restartPrimaryControlPlane
I0408 19:28:50.619920 800094 kubeadm.go:394] duration metric: took 140.813223ms to StartCluster
I0408 19:28:50.619963 800094 settings.go:142] acquiring lock: {Name:mkbda39b60e68828178734543a93a563b6df0eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:28:50.620044 800094 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20604-581234/kubeconfig
I0408 19:28:50.621016 800094 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-581234/kubeconfig: {Name:mkdc378fe21dd04e3a9dd54b97606915163de072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:28:50.621263 800094 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0408 19:28:50.621646 800094 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0408 19:28:50.621725 800094 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-789808"
I0408 19:28:50.621743 800094 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-789808"
W0408 19:28:50.621755 800094 addons.go:247] addon storage-provisioner should already be in state true
I0408 19:28:50.621779 800094 host.go:66] Checking if "old-k8s-version-789808" exists ...
I0408 19:28:50.622229 800094 cli_runner.go:164] Run: docker container inspect old-k8s-version-789808 --format={{.State.Status}}
I0408 19:28:50.622702 800094 config.go:182] Loaded profile config "old-k8s-version-789808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0408 19:28:50.622777 800094 addons.go:69] Setting dashboard=true in profile "old-k8s-version-789808"
I0408 19:28:50.622794 800094 addons.go:238] Setting addon dashboard=true in "old-k8s-version-789808"
W0408 19:28:50.622800 800094 addons.go:247] addon dashboard should already be in state true
I0408 19:28:50.622830 800094 host.go:66] Checking if "old-k8s-version-789808" exists ...
I0408 19:28:50.623221 800094 cli_runner.go:164] Run: docker container inspect old-k8s-version-789808 --format={{.State.Status}}
I0408 19:28:50.623629 800094 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-789808"
I0408 19:28:50.623651 800094 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-789808"
I0408 19:28:50.624132 800094 cli_runner.go:164] Run: docker container inspect old-k8s-version-789808 --format={{.State.Status}}
I0408 19:28:50.626643 800094 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-789808"
I0408 19:28:50.626704 800094 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-789808"
W0408 19:28:50.626726 800094 addons.go:247] addon metrics-server should already be in state true
I0408 19:28:50.626767 800094 host.go:66] Checking if "old-k8s-version-789808" exists ...
I0408 19:28:50.630004 800094 cli_runner.go:164] Run: docker container inspect old-k8s-version-789808 --format={{.State.Status}}
I0408 19:28:50.631300 800094 out.go:177] * Verifying Kubernetes components...
I0408 19:28:50.635600 800094 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0408 19:28:50.697147 800094 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0408 19:28:50.700140 800094 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0408 19:28:50.700163 800094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0408 19:28:50.700228 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:50.709667 800094 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-789808"
W0408 19:28:50.709689 800094 addons.go:247] addon default-storageclass should already be in state true
I0408 19:28:50.709716 800094 host.go:66] Checking if "old-k8s-version-789808" exists ...
I0408 19:28:50.710126 800094 cli_runner.go:164] Run: docker container inspect old-k8s-version-789808 --format={{.State.Status}}
I0408 19:28:50.715071 800094 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0408 19:28:50.718447 800094 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0408 19:28:50.721872 800094 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0408 19:28:50.721901 800094 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0408 19:28:50.721977 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:50.722176 800094 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0408 19:28:50.729941 800094 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0408 19:28:50.729972 800094 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0408 19:28:50.730042 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:50.765775 800094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/old-k8s-version-789808/id_rsa Username:docker}
I0408 19:28:50.778136 800094 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0408 19:28:50.778155 800094 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0408 19:28:50.778217 800094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-789808
I0408 19:28:50.799827 800094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/old-k8s-version-789808/id_rsa Username:docker}
I0408 19:28:50.807304 800094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/old-k8s-version-789808/id_rsa Username:docker}
I0408 19:28:50.814038 800094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/old-k8s-version-789808/id_rsa Username:docker}
I0408 19:28:50.884900 800094 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0408 19:28:50.924371 800094 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-789808" to be "Ready" ...
I0408 19:28:51.017940 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0408 19:28:51.076737 800094 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0408 19:28:51.076759 800094 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0408 19:28:51.111339 800094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0408 19:28:51.111408 800094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0408 19:28:51.151642 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0408 19:28:51.170654 800094 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0408 19:28:51.170741 800094 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0408 19:28:51.243538 800094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0408 19:28:51.243684 800094 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0408 19:28:51.281657 800094 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0408 19:28:51.281752 800094 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0408 19:28:51.343604 800094 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0408 19:28:51.343682 800094 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0408 19:28:51.411149 800094 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0408 19:28:51.411235 800094 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
W0408 19:28:51.414122 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:51.414207 800094 retry.go:31] will retry after 288.772526ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0408 19:28:51.433875 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:51.433956 800094 retry.go:31] will retry after 365.038907ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:51.457013 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0408 19:28:51.478100 800094 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0408 19:28:51.478182 800094 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0408 19:28:51.527948 800094 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0408 19:28:51.528032 800094 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0408 19:28:51.610575 800094 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0408 19:28:51.610595 800094 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
W0408 19:28:51.678082 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:51.678108 800094 retry.go:31] will retry after 332.446908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:51.683601 800094 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0408 19:28:51.683625 800094 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0408 19:28:51.703437 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0408 19:28:51.720603 800094 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0408 19:28:51.720628 800094 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0408 19:28:51.771671 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0408 19:28:51.799774 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0408 19:28:51.876106 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:51.876139 800094 retry.go:31] will retry after 353.334505ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0408 19:28:51.996769 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:51.996801 800094 retry.go:31] will retry after 222.254437ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:52.011115 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0408 19:28:52.070266 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:52.070298 800094 retry.go:31] will retry after 335.795093ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0408 19:28:52.166321 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:52.166419 800094 retry.go:31] will retry after 388.821101ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:52.219750 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0408 19:28:52.230656 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0408 19:28:52.407286 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0408 19:28:52.483851 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0408 19:28:52.484012 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:52.484031 800094 retry.go:31] will retry after 469.424865ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:52.483900 800094 retry.go:31] will retry after 367.321874ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:52.555677 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0408 19:28:52.601462 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:52.601498 800094 retry.go:31] will retry after 339.822619ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0408 19:28:52.709831 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:52.709867 800094 retry.go:31] will retry after 657.243885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:52.852242 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0408 19:28:52.925831 800094 node_ready.go:53] error getting node "old-k8s-version-789808": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-789808": dial tcp 192.168.76.2:8443: connect: connection refused
I0408 19:28:52.942072 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0408 19:28:52.954382 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0408 19:28:52.975351 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:52.975382 800094 retry.go:31] will retry after 672.800992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0408 19:28:53.122308 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:53.122341 800094 retry.go:31] will retry after 596.912605ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0408 19:28:53.174184 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:53.174232 800094 retry.go:31] will retry after 815.547269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:53.367609 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0408 19:28:53.488384 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:53.488421 800094 retry.go:31] will retry after 913.621121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:53.648776 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0408 19:28:53.720248 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0408 19:28:53.792634 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:53.792662 800094 retry.go:31] will retry after 1.045651101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0408 19:28:53.862177 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:53.862212 800094 retry.go:31] will retry after 862.673381ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:53.990096 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0408 19:28:54.084523 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:54.084558 800094 retry.go:31] will retry after 832.311485ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:54.403014 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0408 19:28:54.481273 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:54.481357 800094 retry.go:31] will retry after 1.422929943s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:54.725461 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0408 19:28:54.801112 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:54.801151 800094 retry.go:31] will retry after 1.300047762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:54.839361 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0408 19:28:54.917189 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0408 19:28:54.919737 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:54.919818 800094 retry.go:31] will retry after 1.392896079s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0408 19:28:55.015301 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:55.015345 800094 retry.go:31] will retry after 2.815396596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:55.425225 800094 node_ready.go:53] error getting node "old-k8s-version-789808": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-789808": dial tcp 192.168.76.2:8443: connect: connection refused
I0408 19:28:55.904960 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0408 19:28:55.985368 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:55.985403 800094 retry.go:31] will retry after 1.093929052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:56.101620 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0408 19:28:56.181523 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:56.181554 800094 retry.go:31] will retry after 2.385417508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:56.313846 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0408 19:28:56.390566 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:56.390608 800094 retry.go:31] will retry after 1.711280426s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:57.080327 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0408 19:28:57.158712 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:57.158745 800094 retry.go:31] will retry after 2.189380076s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:57.425497 800094 node_ready.go:53] error getting node "old-k8s-version-789808": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-789808": dial tcp 192.168.76.2:8443: connect: connection refused
I0408 19:28:57.831108 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0408 19:28:57.913517 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:57.913549 800094 retry.go:31] will retry after 1.785760206s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:58.102578 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0408 19:28:58.181333 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:58.181368 800094 retry.go:31] will retry after 2.527884123s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:58.567231 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0408 19:28:58.665965 800094 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:58.666002 800094 retry.go:31] will retry after 5.763901747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0408 19:28:59.348327 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0408 19:28:59.699509 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0408 19:29:00.710144 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0408 19:29:04.430098 800094 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0408 19:29:07.517827 800094 node_ready.go:49] node "old-k8s-version-789808" has status "Ready":"True"
I0408 19:29:07.517857 800094 node_ready.go:38] duration metric: took 16.593448518s for node "old-k8s-version-789808" to be "Ready" ...
I0408 19:29:07.517873 800094 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0408 19:29:07.733353 800094 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-pcfpp" in "kube-system" namespace to be "Ready" ...
I0408 19:29:07.762775 800094 pod_ready.go:93] pod "coredns-74ff55c5b-pcfpp" in "kube-system" namespace has status "Ready":"True"
I0408 19:29:07.762795 800094 pod_ready.go:82] duration metric: took 29.412656ms for pod "coredns-74ff55c5b-pcfpp" in "kube-system" namespace to be "Ready" ...
I0408 19:29:07.762808 800094 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-789808" in "kube-system" namespace to be "Ready" ...
I0408 19:29:07.795439 800094 pod_ready.go:93] pod "etcd-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"True"
I0408 19:29:07.795513 800094 pod_ready.go:82] duration metric: took 32.697174ms for pod "etcd-old-k8s-version-789808" in "kube-system" namespace to be "Ready" ...
I0408 19:29:07.795541 800094 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-789808" in "kube-system" namespace to be "Ready" ...
I0408 19:29:07.833851 800094 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"True"
I0408 19:29:07.833927 800094 pod_ready.go:82] duration metric: took 38.334744ms for pod "kube-apiserver-old-k8s-version-789808" in "kube-system" namespace to be "Ready" ...
I0408 19:29:07.833953 800094 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-789808" in "kube-system" namespace to be "Ready" ...
I0408 19:29:07.844911 800094 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"True"
I0408 19:29:07.844984 800094 pod_ready.go:82] duration metric: took 11.009475ms for pod "kube-controller-manager-old-k8s-version-789808" in "kube-system" namespace to be "Ready" ...
I0408 19:29:07.845011 800094 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n8gzl" in "kube-system" namespace to be "Ready" ...
I0408 19:29:07.867915 800094 pod_ready.go:93] pod "kube-proxy-n8gzl" in "kube-system" namespace has status "Ready":"True"
I0408 19:29:07.867996 800094 pod_ready.go:82] duration metric: took 22.964086ms for pod "kube-proxy-n8gzl" in "kube-system" namespace to be "Ready" ...
I0408 19:29:07.868025 800094 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace to be "Ready" ...
I0408 19:29:08.952588 800094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.604217695s)
I0408 19:29:08.952761 800094 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-789808"
I0408 19:29:08.952708 800094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.253168196s)
I0408 19:29:09.093036 800094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.382840269s)
I0408 19:29:09.093127 800094 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.662993921s)
I0408 19:29:09.096597 800094 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-789808 addons enable metrics-server
I0408 19:29:09.100397 800094 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
I0408 19:29:09.103308 800094 addons.go:514] duration metric: took 18.481653279s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
I0408 19:29:09.873742 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:12.373648 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:14.872610 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:16.873268 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:18.874762 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:21.374435 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:23.873184 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:25.883893 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:28.376503 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:30.878540 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:32.879334 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:35.373731 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:37.872998 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:39.873541 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:41.873919 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:43.874812 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:46.373916 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:48.374547 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:50.377738 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:52.875242 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:55.379604 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:29:57.880213 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:00.400981 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:02.874159 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:04.874357 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:07.373880 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:09.873830 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:12.374208 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:14.874074 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:17.372871 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:19.373065 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:21.373785 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:23.873360 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:25.873547 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:27.873721 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:30.373306 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:32.374005 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:34.374066 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:36.374399 800094 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:37.373503 800094 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace has status "Ready":"True"
I0408 19:30:37.373527 800094 pod_ready.go:82] duration metric: took 1m29.505481054s for pod "kube-scheduler-old-k8s-version-789808" in "kube-system" namespace to be "Ready" ...
I0408 19:30:37.373540 800094 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace to be "Ready" ...
I0408 19:30:39.379203 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:41.881438 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:43.885454 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:46.378967 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:48.882069 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:51.379703 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:53.881658 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:56.378671 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:30:58.378914 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:00.393314 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:02.878905 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:05.378838 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:07.381995 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:09.879994 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:11.883291 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:14.379461 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:16.379699 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:18.472227 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:20.879540 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:23.380202 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:25.880409 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:27.885774 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:30.379794 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:32.879325 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:34.882106 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:37.378866 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:39.881518 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:42.379740 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:44.879534 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:47.378208 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:49.378263 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:51.379038 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:53.379573 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:55.880271 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:31:58.378663 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:00.403473 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:02.879213 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:04.879565 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:07.378123 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:09.379558 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:11.881972 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:14.378888 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:16.379286 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:18.380494 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:20.879256 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:23.379765 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:25.380237 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:27.880929 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:29.881082 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:32.378771 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:34.378924 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:36.881046 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:39.382663 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:41.881482 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:44.378606 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:46.378981 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:48.380101 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:50.879662 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:53.379916 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:55.878929 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:57.882882 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:32:59.887092 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:02.380426 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:04.879596 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:06.883488 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:09.476658 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:11.879988 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:13.881554 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:16.378800 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:18.378999 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:20.379219 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:22.379540 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:24.878550 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:26.881954 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:29.392232 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:31.881471 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:33.883678 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:35.891728 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:38.379746 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:40.878797 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:42.886282 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:45.378976 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:47.880233 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:49.882546 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:52.379518 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:54.879235 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:56.881564 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:33:59.379358 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:01.379482 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:03.879272 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:05.880184 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:08.380751 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:10.879793 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:13.379115 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:15.880005 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:18.378897 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:20.379382 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:22.878538 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:24.879406 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:26.879831 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:29.378690 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:31.379170 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:33.379315 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:35.387807 800094 pod_ready.go:103] pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace has status "Ready":"False"
I0408 19:34:37.380661 800094 pod_ready.go:82] duration metric: took 4m0.007107167s for pod "metrics-server-9975d5f86-jmllj" in "kube-system" namespace to be "Ready" ...
E0408 19:34:37.380683 800094 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0408 19:34:37.380692 800094 pod_ready.go:39] duration metric: took 5m29.862798954s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0408 19:34:37.380708 800094 api_server.go:52] waiting for apiserver process to appear ...
I0408 19:34:37.380745 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0408 19:34:37.380800 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0408 19:34:37.456355 800094 cri.go:89] found id: "301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39"
I0408 19:34:37.456374 800094 cri.go:89] found id: "c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b"
I0408 19:34:37.456379 800094 cri.go:89] found id: ""
I0408 19:34:37.456386 800094 logs.go:282] 2 containers: [301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39 c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b]
I0408 19:34:37.456446 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.461407 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.466058 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0408 19:34:37.466126 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0408 19:34:37.530200 800094 cri.go:89] found id: "9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a"
I0408 19:34:37.530276 800094 cri.go:89] found id: "8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca"
I0408 19:34:37.530295 800094 cri.go:89] found id: ""
I0408 19:34:37.530319 800094 logs.go:282] 2 containers: [9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a 8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca]
I0408 19:34:37.530399 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.534489 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.540237 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0408 19:34:37.540357 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0408 19:34:37.614840 800094 cri.go:89] found id: "c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439"
I0408 19:34:37.614914 800094 cri.go:89] found id: "a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d"
I0408 19:34:37.614933 800094 cri.go:89] found id: ""
I0408 19:34:37.614955 800094 logs.go:282] 2 containers: [c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439 a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d]
I0408 19:34:37.615039 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.619956 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.624306 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0408 19:34:37.624433 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0408 19:34:37.686468 800094 cri.go:89] found id: "1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376"
I0408 19:34:37.686504 800094 cri.go:89] found id: "e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0"
I0408 19:34:37.686509 800094 cri.go:89] found id: ""
I0408 19:34:37.686517 800094 logs.go:282] 2 containers: [1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376 e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0]
I0408 19:34:37.686572 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.691909 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.698018 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0408 19:34:37.698088 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0408 19:34:37.755563 800094 cri.go:89] found id: "b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2"
I0408 19:34:37.755582 800094 cri.go:89] found id: "866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e"
I0408 19:34:37.755587 800094 cri.go:89] found id: ""
I0408 19:34:37.755595 800094 logs.go:282] 2 containers: [b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2 866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e]
I0408 19:34:37.755657 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.759379 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.762887 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0408 19:34:37.762962 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0408 19:34:37.823868 800094 cri.go:89] found id: "a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7"
I0408 19:34:37.823888 800094 cri.go:89] found id: "38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6"
I0408 19:34:37.823892 800094 cri.go:89] found id: ""
I0408 19:34:37.823900 800094 logs.go:282] 2 containers: [a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7 38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6]
I0408 19:34:37.823960 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.834230 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.838096 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0408 19:34:37.838167 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0408 19:34:37.904103 800094 cri.go:89] found id: "f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98"
I0408 19:34:37.904126 800094 cri.go:89] found id: "f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c"
I0408 19:34:37.904132 800094 cri.go:89] found id: ""
I0408 19:34:37.904139 800094 logs.go:282] 2 containers: [f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98 f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c]
I0408 19:34:37.904203 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.908566 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.913158 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0408 19:34:37.913229 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0408 19:34:37.962440 800094 cri.go:89] found id: "ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc"
I0408 19:34:37.962459 800094 cri.go:89] found id: ""
I0408 19:34:37.962467 800094 logs.go:282] 1 containers: [ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc]
I0408 19:34:37.962552 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.966467 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0408 19:34:37.966549 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0408 19:34:38.024616 800094 cri.go:89] found id: "4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548"
I0408 19:34:38.024658 800094 cri.go:89] found id: "0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3"
I0408 19:34:38.024667 800094 cri.go:89] found id: ""
I0408 19:34:38.024676 800094 logs.go:282] 2 containers: [4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548 0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3]
I0408 19:34:38.024759 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:38.029591 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:38.034251 800094 logs.go:123] Gathering logs for coredns [c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439] ...
I0408 19:34:38.034281 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439"
I0408 19:34:38.092082 800094 logs.go:123] Gathering logs for coredns [a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d] ...
I0408 19:34:38.092108 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d"
I0408 19:34:38.150608 800094 logs.go:123] Gathering logs for kube-controller-manager [a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7] ...
I0408 19:34:38.150633 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7"
I0408 19:34:38.244146 800094 logs.go:123] Gathering logs for kindnet [f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98] ...
I0408 19:34:38.244222 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98"
I0408 19:34:38.335533 800094 logs.go:123] Gathering logs for kubernetes-dashboard [ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc] ...
I0408 19:34:38.335717 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc"
I0408 19:34:38.415816 800094 logs.go:123] Gathering logs for storage-provisioner [4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548] ...
I0408 19:34:38.415843 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548"
I0408 19:34:38.478451 800094 logs.go:123] Gathering logs for dmesg ...
I0408 19:34:38.478509 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0408 19:34:38.512484 800094 logs.go:123] Gathering logs for etcd [9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a] ...
I0408 19:34:38.512511 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a"
I0408 19:34:38.602070 800094 logs.go:123] Gathering logs for kube-scheduler [e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0] ...
I0408 19:34:38.602154 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0"
I0408 19:34:38.670831 800094 logs.go:123] Gathering logs for kube-proxy [866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e] ...
I0408 19:34:38.670874 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e"
I0408 19:34:38.751971 800094 logs.go:123] Gathering logs for storage-provisioner [0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3] ...
I0408 19:34:38.751998 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3"
I0408 19:34:38.814079 800094 logs.go:123] Gathering logs for kube-apiserver [301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39] ...
I0408 19:34:38.814112 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39"
I0408 19:34:38.904557 800094 logs.go:123] Gathering logs for etcd [8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca] ...
I0408 19:34:38.904594 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca"
I0408 19:34:38.977383 800094 logs.go:123] Gathering logs for kube-controller-manager [38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6] ...
I0408 19:34:38.977473 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6"
I0408 19:34:39.073729 800094 logs.go:123] Gathering logs for containerd ...
I0408 19:34:39.073823 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0408 19:34:39.145383 800094 logs.go:123] Gathering logs for container status ...
I0408 19:34:39.145453 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0408 19:34:39.229500 800094 logs.go:123] Gathering logs for kube-apiserver [c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b] ...
I0408 19:34:39.229581 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b"
I0408 19:34:39.307911 800094 logs.go:123] Gathering logs for kube-scheduler [1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376] ...
I0408 19:34:39.307970 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376"
I0408 19:34:39.369341 800094 logs.go:123] Gathering logs for kube-proxy [b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2] ...
I0408 19:34:39.369372 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2"
I0408 19:34:39.427556 800094 logs.go:123] Gathering logs for kindnet [f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c] ...
I0408 19:34:39.427582 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c"
I0408 19:34:39.505938 800094 logs.go:123] Gathering logs for kubelet ...
I0408 19:34:39.506015 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0408 19:34:39.571204 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.448599 660 reflector.go:138] object-"kube-system"/"kube-proxy-token-hrsw6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-hrsw6" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.574379 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.454791 660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-62qzt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-62qzt" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.582617 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566273 660 reflector.go:138] object-"default"/"default-token-9t7wk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-9t7wk" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.582916 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566209 660 reflector.go:138] object-"kube-system"/"kindnet-token-f74cf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-f74cf" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.583148 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566367 660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.583386 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566456 660 reflector.go:138] object-"kube-system"/"coredns-token-tjnm4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tjnm4" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.583658 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566571 660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.583910 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566704 660 reflector.go:138] object-"kube-system"/"metrics-server-token-ntl9w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-ntl9w" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.593831 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:11 old-k8s-version-789808 kubelet[660]: E0408 19:29:11.608088 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:39.594100 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:12 old-k8s-version-789808 kubelet[660]: E0408 19:29:12.520827 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.597419 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:25 old-k8s-version-789808 kubelet[660]: E0408 19:29:25.314436 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:39.607006 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:36 old-k8s-version-789808 kubelet[660]: E0408 19:29:36.623687 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.607573 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:37 old-k8s-version-789808 kubelet[660]: E0408 19:29:37.628270 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.609574 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:39 old-k8s-version-789808 kubelet[660]: E0408 19:29:39.635668 660 pod_workers.go:191] Error syncing pod 8ea68ede-5c89-4238-b5e4-9811e9a34fc4 ("storage-provisioner_kube-system(8ea68ede-5c89-4238-b5e4-9811e9a34fc4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8ea68ede-5c89-4238-b5e4-9811e9a34fc4)"
W0408 19:34:39.610010 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:39 old-k8s-version-789808 kubelet[660]: E0408 19:29:39.706714 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.610228 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:40 old-k8s-version-789808 kubelet[660]: E0408 19:29:40.304587 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.613405 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:54 old-k8s-version-789808 kubelet[660]: E0408 19:29:54.338620 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:39.614050 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:55 old-k8s-version-789808 kubelet[660]: E0408 19:29:55.704770 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.614413 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:59 old-k8s-version-789808 kubelet[660]: E0408 19:29:59.707332 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.615037 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-789808 kubelet[660]: E0408 19:30:09.304394 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.615459 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:15 old-k8s-version-789808 kubelet[660]: E0408 19:30:15.304089 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.615697 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:20 old-k8s-version-789808 kubelet[660]: E0408 19:30:20.304595 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.616425 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:26 old-k8s-version-789808 kubelet[660]: E0408 19:30:26.799457 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.616854 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:29 old-k8s-version-789808 kubelet[660]: E0408 19:30:29.706613 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.617070 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:31 old-k8s-version-789808 kubelet[660]: E0408 19:30:31.304384 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.620863 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:40 old-k8s-version-789808 kubelet[660]: E0408 19:30:40.304513 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.623388 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:44 old-k8s-version-789808 kubelet[660]: E0408 19:30:44.317089 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:39.623750 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:51 old-k8s-version-789808 kubelet[660]: E0408 19:30:51.304236 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.623962 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:59 old-k8s-version-789808 kubelet[660]: E0408 19:30:59.309103 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.624338 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:03 old-k8s-version-789808 kubelet[660]: E0408 19:31:03.304045 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.624612 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:12 old-k8s-version-789808 kubelet[660]: E0408 19:31:12.304580 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.625257 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:18 old-k8s-version-789808 kubelet[660]: E0408 19:31:18.945002 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.625627 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:19 old-k8s-version-789808 kubelet[660]: E0408 19:31:19.949074 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.625839 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:26 old-k8s-version-789808 kubelet[660]: E0408 19:31:26.304537 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.626202 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:32 old-k8s-version-789808 kubelet[660]: E0408 19:31:32.304493 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.626414 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:37 old-k8s-version-789808 kubelet[660]: E0408 19:31:37.304399 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.626784 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:43 old-k8s-version-789808 kubelet[660]: E0408 19:31:43.304105 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.627000 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:49 old-k8s-version-789808 kubelet[660]: E0408 19:31:49.304415 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.627354 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:58 old-k8s-version-789808 kubelet[660]: E0408 19:31:58.313363 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.627563 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:00 old-k8s-version-789808 kubelet[660]: E0408 19:32:00.315122 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.627917 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:10 old-k8s-version-789808 kubelet[660]: E0408 19:32:10.304077 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.630429 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:13 old-k8s-version-789808 kubelet[660]: E0408 19:32:13.313444 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:39.630798 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:25 old-k8s-version-789808 kubelet[660]: E0408 19:32:25.304097 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.631011 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:28 old-k8s-version-789808 kubelet[660]: E0408 19:32:28.305180 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.631369 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:37 old-k8s-version-789808 kubelet[660]: E0408 19:32:37.304121 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.631578 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:39 old-k8s-version-789808 kubelet[660]: E0408 19:32:39.304389 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.631921 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:51 old-k8s-version-789808 kubelet[660]: E0408 19:32:51.305084 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.632405 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:52 old-k8s-version-789808 kubelet[660]: E0408 19:32:52.228011 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.632757 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:59 old-k8s-version-789808 kubelet[660]: E0408 19:32:59.707003 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.632967 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:03 old-k8s-version-789808 kubelet[660]: E0408 19:33:03.304676 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.633319 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:12 old-k8s-version-789808 kubelet[660]: E0408 19:33:12.304502 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.633530 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:17 old-k8s-version-789808 kubelet[660]: E0408 19:33:17.304317 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.633890 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:23 old-k8s-version-789808 kubelet[660]: E0408 19:33:23.304147 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.634100 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:28 old-k8s-version-789808 kubelet[660]: E0408 19:33:28.306566 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.634458 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:36 old-k8s-version-789808 kubelet[660]: E0408 19:33:36.304754 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.634749 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:42 old-k8s-version-789808 kubelet[660]: E0408 19:33:42.305165 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.635191 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:51 old-k8s-version-789808 kubelet[660]: E0408 19:33:51.304494 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.635410 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:55 old-k8s-version-789808 kubelet[660]: E0408 19:33:55.304450 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.635766 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:05 old-k8s-version-789808 kubelet[660]: E0408 19:34:05.304139 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.635983 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:10 old-k8s-version-789808 kubelet[660]: E0408 19:34:10.304418 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.636336 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:18 old-k8s-version-789808 kubelet[660]: E0408 19:34:18.305055 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.636548 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:25 old-k8s-version-789808 kubelet[660]: E0408 19:34:25.304373 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.636902 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:30 old-k8s-version-789808 kubelet[660]: E0408 19:34:30.304913 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.637112 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:37 old-k8s-version-789808 kubelet[660]: E0408 19:34:37.304870 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0408 19:34:39.637137 800094 logs.go:123] Gathering logs for describe nodes ...
I0408 19:34:39.637164 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0408 19:34:39.853721 800094 out.go:358] Setting ErrFile to fd 2...
I0408 19:34:39.853918 800094 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0408 19:34:39.854005 800094 out.go:270] X Problems detected in kubelet:
W0408 19:34:39.854170 800094 out.go:270] Apr 08 19:34:10 old-k8s-version-789808 kubelet[660]: E0408 19:34:10.304418 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.854218 800094 out.go:270] Apr 08 19:34:18 old-k8s-version-789808 kubelet[660]: E0408 19:34:18.305055 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.854275 800094 out.go:270] Apr 08 19:34:25 old-k8s-version-789808 kubelet[660]: E0408 19:34:25.304373 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.854315 800094 out.go:270] Apr 08 19:34:30 old-k8s-version-789808 kubelet[660]: E0408 19:34:30.304913 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.854348 800094 out.go:270] Apr 08 19:34:37 old-k8s-version-789808 kubelet[660]: E0408 19:34:37.304870 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0408 19:34:39.854395 800094 out.go:358] Setting ErrFile to fd 2...
I0408 19:34:39.854416 800094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 19:34:49.859697 800094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0408 19:34:49.876565 800094 api_server.go:72] duration metric: took 5m59.255265736s to wait for apiserver process to appear ...
I0408 19:34:49.876598 800094 api_server.go:88] waiting for apiserver healthz status ...
I0408 19:34:49.876632 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0408 19:34:49.876718 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0408 19:34:49.939820 800094 cri.go:89] found id: "301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39"
I0408 19:34:49.939851 800094 cri.go:89] found id: "c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b"
I0408 19:34:49.939856 800094 cri.go:89] found id: ""
I0408 19:34:49.939864 800094 logs.go:282] 2 containers: [301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39 c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b]
I0408 19:34:49.939947 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:49.947167 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:49.954567 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0408 19:34:49.954661 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0408 19:34:50.025717 800094 cri.go:89] found id: "9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a"
I0408 19:34:50.025743 800094 cri.go:89] found id: "8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca"
I0408 19:34:50.025749 800094 cri.go:89] found id: ""
I0408 19:34:50.025756 800094 logs.go:282] 2 containers: [9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a 8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca]
I0408 19:34:50.025822 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.035367 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.041970 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0408 19:34:50.042047 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0408 19:34:50.122321 800094 cri.go:89] found id: "c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439"
I0408 19:34:50.122342 800094 cri.go:89] found id: "a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d"
I0408 19:34:50.122347 800094 cri.go:89] found id: ""
I0408 19:34:50.122354 800094 logs.go:282] 2 containers: [c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439 a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d]
I0408 19:34:50.122413 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.126523 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.141640 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0408 19:34:50.141714 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0408 19:34:50.247771 800094 cri.go:89] found id: "1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376"
I0408 19:34:50.247790 800094 cri.go:89] found id: "e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0"
I0408 19:34:50.247795 800094 cri.go:89] found id: ""
I0408 19:34:50.247803 800094 logs.go:282] 2 containers: [1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376 e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0]
I0408 19:34:50.247859 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.251752 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.255498 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0408 19:34:50.255559 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0408 19:34:50.302738 800094 cri.go:89] found id: "b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2"
I0408 19:34:50.302758 800094 cri.go:89] found id: "866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e"
I0408 19:34:50.302763 800094 cri.go:89] found id: ""
I0408 19:34:50.302770 800094 logs.go:282] 2 containers: [b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2 866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e]
I0408 19:34:50.302827 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.313030 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.316748 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0408 19:34:50.316818 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0408 19:34:50.378626 800094 cri.go:89] found id: "a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7"
I0408 19:34:50.378657 800094 cri.go:89] found id: "38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6"
I0408 19:34:50.378663 800094 cri.go:89] found id: ""
I0408 19:34:50.378670 800094 logs.go:282] 2 containers: [a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7 38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6]
I0408 19:34:50.378728 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.399304 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.403639 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0408 19:34:50.403711 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0408 19:34:50.488910 800094 cri.go:89] found id: "f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98"
I0408 19:34:50.488929 800094 cri.go:89] found id: "f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c"
I0408 19:34:50.488933 800094 cri.go:89] found id: ""
I0408 19:34:50.488940 800094 logs.go:282] 2 containers: [f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98 f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c]
I0408 19:34:50.489012 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.497668 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.510988 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0408 19:34:50.511062 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0408 19:34:50.572505 800094 cri.go:89] found id: "ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc"
I0408 19:34:50.572525 800094 cri.go:89] found id: ""
I0408 19:34:50.572533 800094 logs.go:282] 1 containers: [ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc]
I0408 19:34:50.572589 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.576551 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0408 19:34:50.576622 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0408 19:34:50.632631 800094 cri.go:89] found id: "4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548"
I0408 19:34:50.632649 800094 cri.go:89] found id: "0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3"
I0408 19:34:50.632654 800094 cri.go:89] found id: ""
I0408 19:34:50.632662 800094 logs.go:282] 2 containers: [4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548 0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3]
I0408 19:34:50.632716 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.636840 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.644220 800094 logs.go:123] Gathering logs for kubelet ...
I0408 19:34:50.644252 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0408 19:34:50.705436 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.448599 660 reflector.go:138] object-"kube-system"/"kube-proxy-token-hrsw6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-hrsw6" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.705683 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.454791 660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-62qzt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-62qzt" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.709331 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566273 660 reflector.go:138] object-"default"/"default-token-9t7wk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-9t7wk" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.709547 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566209 660 reflector.go:138] object-"kube-system"/"kindnet-token-f74cf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-f74cf" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.709751 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566367 660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.709962 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566456 660 reflector.go:138] object-"kube-system"/"coredns-token-tjnm4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tjnm4" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.710163 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566571 660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.710609 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566704 660 reflector.go:138] object-"kube-system"/"metrics-server-token-ntl9w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-ntl9w" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.720297 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:11 old-k8s-version-789808 kubelet[660]: E0408 19:29:11.608088 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:50.720564 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:12 old-k8s-version-789808 kubelet[660]: E0408 19:29:12.520827 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.724426 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:25 old-k8s-version-789808 kubelet[660]: E0408 19:29:25.314436 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:50.726461 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:36 old-k8s-version-789808 kubelet[660]: E0408 19:29:36.623687 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.727070 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:37 old-k8s-version-789808 kubelet[660]: E0408 19:29:37.628270 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.728686 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:39 old-k8s-version-789808 kubelet[660]: E0408 19:29:39.635668 660 pod_workers.go:191] Error syncing pod 8ea68ede-5c89-4238-b5e4-9811e9a34fc4 ("storage-provisioner_kube-system(8ea68ede-5c89-4238-b5e4-9811e9a34fc4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8ea68ede-5c89-4238-b5e4-9811e9a34fc4)"
W0408 19:34:50.729084 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:39 old-k8s-version-789808 kubelet[660]: E0408 19:29:39.706714 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.729299 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:40 old-k8s-version-789808 kubelet[660]: E0408 19:29:40.304587 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.733634 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:54 old-k8s-version-789808 kubelet[660]: E0408 19:29:54.338620 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:50.734496 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:55 old-k8s-version-789808 kubelet[660]: E0408 19:29:55.704770 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.734883 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:59 old-k8s-version-789808 kubelet[660]: E0408 19:29:59.707332 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.735097 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-789808 kubelet[660]: E0408 19:30:09.304394 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.735453 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:15 old-k8s-version-789808 kubelet[660]: E0408 19:30:15.304089 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.735665 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:20 old-k8s-version-789808 kubelet[660]: E0408 19:30:20.304595 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.736288 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:26 old-k8s-version-789808 kubelet[660]: E0408 19:30:26.799457 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.736651 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:29 old-k8s-version-789808 kubelet[660]: E0408 19:30:29.706613 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.736867 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:31 old-k8s-version-789808 kubelet[660]: E0408 19:30:31.304384 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.737287 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:40 old-k8s-version-789808 kubelet[660]: E0408 19:30:40.304513 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.741676 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:44 old-k8s-version-789808 kubelet[660]: E0408 19:30:44.317089 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:50.742068 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:51 old-k8s-version-789808 kubelet[660]: E0408 19:30:51.304236 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.742291 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:59 old-k8s-version-789808 kubelet[660]: E0408 19:30:59.309103 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.742714 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:03 old-k8s-version-789808 kubelet[660]: E0408 19:31:03.304045 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.742928 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:12 old-k8s-version-789808 kubelet[660]: E0408 19:31:12.304580 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.743542 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:18 old-k8s-version-789808 kubelet[660]: E0408 19:31:18.945002 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.743897 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:19 old-k8s-version-789808 kubelet[660]: E0408 19:31:19.949074 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.744114 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:26 old-k8s-version-789808 kubelet[660]: E0408 19:31:26.304537 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.744466 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:32 old-k8s-version-789808 kubelet[660]: E0408 19:31:32.304493 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.744676 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:37 old-k8s-version-789808 kubelet[660]: E0408 19:31:37.304399 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.745031 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:43 old-k8s-version-789808 kubelet[660]: E0408 19:31:43.304105 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.745239 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:49 old-k8s-version-789808 kubelet[660]: E0408 19:31:49.304415 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.745591 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:58 old-k8s-version-789808 kubelet[660]: E0408 19:31:58.313363 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.745800 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:00 old-k8s-version-789808 kubelet[660]: E0408 19:32:00.315122 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.746155 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:10 old-k8s-version-789808 kubelet[660]: E0408 19:32:10.304077 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.748636 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:13 old-k8s-version-789808 kubelet[660]: E0408 19:32:13.313444 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:50.748991 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:25 old-k8s-version-789808 kubelet[660]: E0408 19:32:25.304097 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.749202 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:28 old-k8s-version-789808 kubelet[660]: E0408 19:32:28.305180 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.749569 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:37 old-k8s-version-789808 kubelet[660]: E0408 19:32:37.304121 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.749793 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:39 old-k8s-version-789808 kubelet[660]: E0408 19:32:39.304389 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.750864 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:51 old-k8s-version-789808 kubelet[660]: E0408 19:32:51.305084 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.751336 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:52 old-k8s-version-789808 kubelet[660]: E0408 19:32:52.228011 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.751665 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:59 old-k8s-version-789808 kubelet[660]: E0408 19:32:59.707003 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.751853 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:03 old-k8s-version-789808 kubelet[660]: E0408 19:33:03.304676 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.752181 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:12 old-k8s-version-789808 kubelet[660]: E0408 19:33:12.304502 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.752367 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:17 old-k8s-version-789808 kubelet[660]: E0408 19:33:17.304317 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.752693 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:23 old-k8s-version-789808 kubelet[660]: E0408 19:33:23.304147 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.752877 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:28 old-k8s-version-789808 kubelet[660]: E0408 19:33:28.306566 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.753204 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:36 old-k8s-version-789808 kubelet[660]: E0408 19:33:36.304754 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.753387 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:42 old-k8s-version-789808 kubelet[660]: E0408 19:33:42.305165 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.753714 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:51 old-k8s-version-789808 kubelet[660]: E0408 19:33:51.304494 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.753899 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:55 old-k8s-version-789808 kubelet[660]: E0408 19:33:55.304450 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.754225 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:05 old-k8s-version-789808 kubelet[660]: E0408 19:34:05.304139 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.754410 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:10 old-k8s-version-789808 kubelet[660]: E0408 19:34:10.304418 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.754760 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:18 old-k8s-version-789808 kubelet[660]: E0408 19:34:18.305055 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.754946 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:25 old-k8s-version-789808 kubelet[660]: E0408 19:34:25.304373 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.755274 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:30 old-k8s-version-789808 kubelet[660]: E0408 19:34:30.304913 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.755458 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:37 old-k8s-version-789808 kubelet[660]: E0408 19:34:37.304870 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.755783 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:45 old-k8s-version-789808 kubelet[660]: E0408 19:34:45.305341 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
I0408 19:34:50.755794 800094 logs.go:123] Gathering logs for describe nodes ...
I0408 19:34:50.755808 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0408 19:34:51.020353 800094 logs.go:123] Gathering logs for kube-scheduler [1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376] ...
I0408 19:34:51.020407 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376"
I0408 19:34:51.106112 800094 logs.go:123] Gathering logs for kube-scheduler [e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0] ...
I0408 19:34:51.106196 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0"
I0408 19:34:51.175226 800094 logs.go:123] Gathering logs for kube-controller-manager [a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7] ...
I0408 19:34:51.175301 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7"
I0408 19:34:51.269888 800094 logs.go:123] Gathering logs for storage-provisioner [0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3] ...
I0408 19:34:51.269921 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3"
I0408 19:34:51.329644 800094 logs.go:123] Gathering logs for container status ...
I0408 19:34:51.329674 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0408 19:34:51.394734 800094 logs.go:123] Gathering logs for dmesg ...
I0408 19:34:51.394765 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0408 19:34:51.413970 800094 logs.go:123] Gathering logs for kube-apiserver [c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b] ...
I0408 19:34:51.414006 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b"
I0408 19:34:51.479198 800094 logs.go:123] Gathering logs for coredns [a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d] ...
I0408 19:34:51.479234 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d"
I0408 19:34:51.524419 800094 logs.go:123] Gathering logs for kube-controller-manager [38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6] ...
I0408 19:34:51.524450 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6"
I0408 19:34:51.605394 800094 logs.go:123] Gathering logs for kube-apiserver [301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39] ...
I0408 19:34:51.605435 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39"
I0408 19:34:51.700064 800094 logs.go:123] Gathering logs for etcd [9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a] ...
I0408 19:34:51.700099 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a"
I0408 19:34:51.757188 800094 logs.go:123] Gathering logs for kube-proxy [b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2] ...
I0408 19:34:51.757221 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2"
I0408 19:34:51.822372 800094 logs.go:123] Gathering logs for kindnet [f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98] ...
I0408 19:34:51.822407 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98"
I0408 19:34:51.896835 800094 logs.go:123] Gathering logs for kubernetes-dashboard [ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc] ...
I0408 19:34:51.896875 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc"
I0408 19:34:51.953497 800094 logs.go:123] Gathering logs for containerd ...
I0408 19:34:51.953532 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0408 19:34:52.016339 800094 logs.go:123] Gathering logs for etcd [8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca] ...
I0408 19:34:52.016381 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca"
I0408 19:34:52.070934 800094 logs.go:123] Gathering logs for coredns [c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439] ...
I0408 19:34:52.070965 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439"
I0408 19:34:52.117207 800094 logs.go:123] Gathering logs for kube-proxy [866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e] ...
I0408 19:34:52.117242 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e"
I0408 19:34:52.175407 800094 logs.go:123] Gathering logs for kindnet [f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c] ...
I0408 19:34:52.175443 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c"
I0408 19:34:52.228175 800094 logs.go:123] Gathering logs for storage-provisioner [4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548] ...
I0408 19:34:52.228205 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548"
I0408 19:34:52.277406 800094 out.go:358] Setting ErrFile to fd 2...
I0408 19:34:52.277431 800094 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0408 19:34:52.277480 800094 out.go:270] X Problems detected in kubelet:
W0408 19:34:52.277495 800094 out.go:270] Apr 08 19:34:18 old-k8s-version-789808 kubelet[660]: E0408 19:34:18.305055 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:52.277500 800094 out.go:270] Apr 08 19:34:25 old-k8s-version-789808 kubelet[660]: E0408 19:34:25.304373 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:52.277520 800094 out.go:270] Apr 08 19:34:30 old-k8s-version-789808 kubelet[660]: E0408 19:34:30.304913 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:52.277526 800094 out.go:270] Apr 08 19:34:37 old-k8s-version-789808 kubelet[660]: E0408 19:34:37.304870 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:52.277538 800094 out.go:270] Apr 08 19:34:45 old-k8s-version-789808 kubelet[660]: E0408 19:34:45.305341 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
I0408 19:34:52.277543 800094 out.go:358] Setting ErrFile to fd 2...
I0408 19:34:52.277548 800094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 19:35:02.279239 800094 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0408 19:35:02.292674 800094 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0408 19:35:02.297750 800094 out.go:201]
W0408 19:35:02.300964 800094 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0408 19:35:02.301075 800094 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0408 19:35:02.301137 800094 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0408 19:35:02.301175 800094 out.go:270] *
W0408 19:35:02.302335 800094 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 19:35:02.307332 800094 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-789808 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-789808
helpers_test.go:235: (dbg) docker inspect old-k8s-version-789808:
-- stdout --
[
{
"Id": "e0f02df7d5d8825e40d33e84b24afe9af19b50efa2b4041776bcd9765da2f9d7",
"Created": "2025-04-08T19:25:39.80707178Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 800219,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-04-08T19:28:42.877104962Z",
"FinishedAt": "2025-04-08T19:28:41.623512691Z"
},
"Image": "sha256:e51065ad0661308920dfd7c7ddda445e530a6bf56321f8317cb47e1df0975e7c",
"ResolvConfPath": "/var/lib/docker/containers/e0f02df7d5d8825e40d33e84b24afe9af19b50efa2b4041776bcd9765da2f9d7/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/e0f02df7d5d8825e40d33e84b24afe9af19b50efa2b4041776bcd9765da2f9d7/hostname",
"HostsPath": "/var/lib/docker/containers/e0f02df7d5d8825e40d33e84b24afe9af19b50efa2b4041776bcd9765da2f9d7/hosts",
"LogPath": "/var/lib/docker/containers/e0f02df7d5d8825e40d33e84b24afe9af19b50efa2b4041776bcd9765da2f9d7/e0f02df7d5d8825e40d33e84b24afe9af19b50efa2b4041776bcd9765da2f9d7-json.log",
"Name": "/old-k8s-version-789808",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-789808:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-789808",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "e0f02df7d5d8825e40d33e84b24afe9af19b50efa2b4041776bcd9765da2f9d7",
"LowerDir": "/var/lib/docker/overlay2/ed922579f8d1977ba29e4e4c86b6db5db9fc96d48eb171e859963c9f37d633f2-init/diff:/var/lib/docker/overlay2/e716a493f5977473c09a52683744cfce333a9470c0dcdb40039076bbb449d8f9/diff",
"MergedDir": "/var/lib/docker/overlay2/ed922579f8d1977ba29e4e4c86b6db5db9fc96d48eb171e859963c9f37d633f2/merged",
"UpperDir": "/var/lib/docker/overlay2/ed922579f8d1977ba29e4e4c86b6db5db9fc96d48eb171e859963c9f37d633f2/diff",
"WorkDir": "/var/lib/docker/overlay2/ed922579f8d1977ba29e4e4c86b6db5db9fc96d48eb171e859963c9f37d633f2/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-789808",
"Source": "/var/lib/docker/volumes/old-k8s-version-789808/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-789808",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-789808",
"name.minikube.sigs.k8s.io": "old-k8s-version-789808",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "591cc2c6c7321f9254bd9d39ba3f4f78155c6e3a635ee1d579b297e02a429587",
"SandboxKey": "/var/run/docker/netns/591cc2c6c732",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33799"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33800"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33803"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33801"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33802"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-789808": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "72:19:b0:15:ab:9a",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "8b0e658ec51b06f367fceccaa71fc3db710cda73dc2e87ccf6967f0273ff5af8",
"EndpointID": "cd7769da60bdb6552877aae1e0c28a01756a64c05b9836601f6c91cab214521c",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-789808",
"e0f02df7d5d8"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-789808 -n old-k8s-version-789808
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-789808 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-789808 logs -n 25: (3.326846212s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-477648 | cert-expiration-477648 | jenkins | v1.35.0 | 08 Apr 25 19:24 UTC | 08 Apr 25 19:24 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-252541 | force-systemd-env-252541 | jenkins | v1.35.0 | 08 Apr 25 19:24 UTC | 08 Apr 25 19:24 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-252541 | force-systemd-env-252541 | jenkins | v1.35.0 | 08 Apr 25 19:24 UTC | 08 Apr 25 19:24 UTC |
| start | -p cert-options-848367 | cert-options-848367 | jenkins | v1.35.0 | 08 Apr 25 19:24 UTC | 08 Apr 25 19:25 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-848367 ssh | cert-options-848367 | jenkins | v1.35.0 | 08 Apr 25 19:25 UTC | 08 Apr 25 19:25 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-848367 -- sudo | cert-options-848367 | jenkins | v1.35.0 | 08 Apr 25 19:25 UTC | 08 Apr 25 19:25 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-848367 | cert-options-848367 | jenkins | v1.35.0 | 08 Apr 25 19:25 UTC | 08 Apr 25 19:25 UTC |
| start | -p old-k8s-version-789808 | old-k8s-version-789808 | jenkins | v1.35.0 | 08 Apr 25 19:25 UTC | 08 Apr 25 19:28 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-477648 | cert-expiration-477648 | jenkins | v1.35.0 | 08 Apr 25 19:27 UTC | 08 Apr 25 19:28 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-477648 | cert-expiration-477648 | jenkins | v1.35.0 | 08 Apr 25 19:28 UTC | 08 Apr 25 19:28 UTC |
| start | -p no-preload-653830 | no-preload-653830 | jenkins | v1.35.0 | 08 Apr 25 19:28 UTC | 08 Apr 25 19:29 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p old-k8s-version-789808 | old-k8s-version-789808 | jenkins | v1.35.0 | 08 Apr 25 19:28 UTC | 08 Apr 25 19:28 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-789808 | old-k8s-version-789808 | jenkins | v1.35.0 | 08 Apr 25 19:28 UTC | 08 Apr 25 19:28 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-789808 | old-k8s-version-789808 | jenkins | v1.35.0 | 08 Apr 25 19:28 UTC | 08 Apr 25 19:28 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-789808 | old-k8s-version-789808 | jenkins | v1.35.0 | 08 Apr 25 19:28 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-653830 | no-preload-653830 | jenkins | v1.35.0 | 08 Apr 25 19:29 UTC | 08 Apr 25 19:29 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-653830 | no-preload-653830 | jenkins | v1.35.0 | 08 Apr 25 19:29 UTC | 08 Apr 25 19:29 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-653830 | no-preload-653830 | jenkins | v1.35.0 | 08 Apr 25 19:29 UTC | 08 Apr 25 19:29 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-653830 | no-preload-653830 | jenkins | v1.35.0 | 08 Apr 25 19:29 UTC | 08 Apr 25 19:34 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| image | no-preload-653830 image list | no-preload-653830 | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-653830 | no-preload-653830 | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-653830 | no-preload-653830 | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-653830 | no-preload-653830 | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
| delete | -p no-preload-653830 | no-preload-653830 | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | 08 Apr 25 19:34 UTC |
| start | -p embed-certs-504925 | embed-certs-504925 | jenkins | v1.35.0 | 08 Apr 25 19:34 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/08 19:34:38
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.24.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0408 19:34:38.299058 810787 out.go:345] Setting OutFile to fd 1 ...
I0408 19:34:38.299182 810787 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 19:34:38.299188 810787 out.go:358] Setting ErrFile to fd 2...
I0408 19:34:38.299193 810787 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 19:34:38.299476 810787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20604-581234/.minikube/bin
I0408 19:34:38.299892 810787 out.go:352] Setting JSON to false
I0408 19:34:38.300911 810787 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11830,"bootTime":1744129049,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I0408 19:34:38.300992 810787 start.go:139] virtualization:
I0408 19:34:38.304815 810787 out.go:177] * [embed-certs-504925] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0408 19:34:38.309059 810787 out.go:177] - MINIKUBE_LOCATION=20604
I0408 19:34:38.309176 810787 notify.go:220] Checking for updates...
I0408 19:34:38.315452 810787 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0408 19:34:38.318620 810787 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20604-581234/kubeconfig
I0408 19:34:38.321759 810787 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20604-581234/.minikube
I0408 19:34:38.324816 810787 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0408 19:34:38.327809 810787 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0408 19:34:38.331393 810787 config.go:182] Loaded profile config "old-k8s-version-789808": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0408 19:34:38.331489 810787 driver.go:394] Setting default libvirt URI to qemu:///system
I0408 19:34:38.366627 810787 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0408 19:34:38.366773 810787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0408 19:34:38.475988  810787 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-08 19:34:38.463632964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0408 19:34:38.476114 810787 docker.go:318] overlay module found
I0408 19:34:38.479546 810787 out.go:177] * Using the docker driver based on user configuration
I0408 19:34:38.482418 810787 start.go:297] selected driver: docker
I0408 19:34:38.482441 810787 start.go:901] validating driver "docker" against <nil>
I0408 19:34:38.482463 810787 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0408 19:34:38.483240 810787 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0408 19:34:38.566785  810787 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-08 19:34:38.556903473 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0408 19:34:38.566953 810787 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0408 19:34:38.567172 810787 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0408 19:34:38.570159 810787 out.go:177] * Using Docker driver with root privileges
I0408 19:34:38.572938 810787 cni.go:84] Creating CNI manager for ""
I0408 19:34:38.573014 810787 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0408 19:34:38.573029 810787 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0408 19:34:38.573105 810787 start.go:340] cluster config:
{Name:embed-certs-504925 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-504925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 19:34:38.576210 810787 out.go:177] * Starting "embed-certs-504925" primary control-plane node in "embed-certs-504925" cluster
I0408 19:34:38.579113 810787 cache.go:121] Beginning downloading kic base image for docker with containerd
I0408 19:34:38.582060 810787 out.go:177] * Pulling base image v0.0.46-1744107393-20604 ...
I0408 19:34:38.584886 810787 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0408 19:34:38.584950 810787 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20604-581234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4
I0408 19:34:38.584964 810787 cache.go:56] Caching tarball of preloaded images
I0408 19:34:38.585064 810787 preload.go:172] Found /home/jenkins/minikube-integration/20604-581234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0408 19:34:38.585080 810787 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on containerd
I0408 19:34:38.585189 810787 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/config.json ...
I0408 19:34:38.585214 810787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/config.json: {Name:mk42b08df71ec432b7953974388585386391e0c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:34:38.585382 810787 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon
I0408 19:34:38.610871 810787 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a in local docker daemon, skipping pull
I0408 19:34:38.610895 810787 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a exists in daemon, skipping load
I0408 19:34:38.610909 810787 cache.go:230] Successfully downloaded all kic artifacts
I0408 19:34:38.610936 810787 start.go:360] acquireMachinesLock for embed-certs-504925: {Name:mkd828de0a859d7fe3db6df09436734b060b2bbb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0408 19:34:38.614613 810787 start.go:364] duration metric: took 3.647701ms to acquireMachinesLock for "embed-certs-504925"
I0408 19:34:38.614668  810787 start.go:93] Provisioning new machine with config: &{Name:embed-certs-504925 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-504925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0408 19:34:38.614743 810787 start.go:125] createHost starting for "" (driver="docker")
I0408 19:34:37.530200 800094 cri.go:89] found id: "9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a"
I0408 19:34:37.530276 800094 cri.go:89] found id: "8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca"
I0408 19:34:37.530295 800094 cri.go:89] found id: ""
I0408 19:34:37.530319 800094 logs.go:282] 2 containers: [9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a 8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca]
I0408 19:34:37.530399 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.534489 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.540237 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0408 19:34:37.540357 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0408 19:34:37.614840 800094 cri.go:89] found id: "c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439"
I0408 19:34:37.614914 800094 cri.go:89] found id: "a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d"
I0408 19:34:37.614933 800094 cri.go:89] found id: ""
I0408 19:34:37.614955 800094 logs.go:282] 2 containers: [c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439 a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d]
I0408 19:34:37.615039 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.619956 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.624306 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0408 19:34:37.624433 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0408 19:34:37.686468 800094 cri.go:89] found id: "1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376"
I0408 19:34:37.686504 800094 cri.go:89] found id: "e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0"
I0408 19:34:37.686509 800094 cri.go:89] found id: ""
I0408 19:34:37.686517 800094 logs.go:282] 2 containers: [1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376 e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0]
I0408 19:34:37.686572 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.691909 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.698018 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0408 19:34:37.698088 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0408 19:34:37.755563 800094 cri.go:89] found id: "b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2"
I0408 19:34:37.755582 800094 cri.go:89] found id: "866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e"
I0408 19:34:37.755587 800094 cri.go:89] found id: ""
I0408 19:34:37.755595 800094 logs.go:282] 2 containers: [b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2 866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e]
I0408 19:34:37.755657 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.759379 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.762887 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0408 19:34:37.762962 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0408 19:34:37.823868 800094 cri.go:89] found id: "a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7"
I0408 19:34:37.823888 800094 cri.go:89] found id: "38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6"
I0408 19:34:37.823892 800094 cri.go:89] found id: ""
I0408 19:34:37.823900 800094 logs.go:282] 2 containers: [a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7 38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6]
I0408 19:34:37.823960 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.834230 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.838096 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0408 19:34:37.838167 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0408 19:34:37.904103 800094 cri.go:89] found id: "f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98"
I0408 19:34:37.904126 800094 cri.go:89] found id: "f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c"
I0408 19:34:37.904132 800094 cri.go:89] found id: ""
I0408 19:34:37.904139 800094 logs.go:282] 2 containers: [f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98 f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c]
I0408 19:34:37.904203 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.908566 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.913158 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0408 19:34:37.913229 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0408 19:34:37.962440 800094 cri.go:89] found id: "ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc"
I0408 19:34:37.962459 800094 cri.go:89] found id: ""
I0408 19:34:37.962467 800094 logs.go:282] 1 containers: [ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc]
I0408 19:34:37.962552 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:37.966467 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0408 19:34:37.966549 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0408 19:34:38.024616 800094 cri.go:89] found id: "4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548"
I0408 19:34:38.024658 800094 cri.go:89] found id: "0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3"
I0408 19:34:38.024667 800094 cri.go:89] found id: ""
I0408 19:34:38.024676 800094 logs.go:282] 2 containers: [4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548 0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3]
I0408 19:34:38.024759 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:38.029591 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:38.034251 800094 logs.go:123] Gathering logs for coredns [c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439] ...
I0408 19:34:38.034281 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439"
I0408 19:34:38.092082 800094 logs.go:123] Gathering logs for coredns [a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d] ...
I0408 19:34:38.092108 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d"
I0408 19:34:38.150608 800094 logs.go:123] Gathering logs for kube-controller-manager [a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7] ...
I0408 19:34:38.150633 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7"
I0408 19:34:38.244146 800094 logs.go:123] Gathering logs for kindnet [f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98] ...
I0408 19:34:38.244222 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98"
I0408 19:34:38.335533 800094 logs.go:123] Gathering logs for kubernetes-dashboard [ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc] ...
I0408 19:34:38.335717 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc"
I0408 19:34:38.415816 800094 logs.go:123] Gathering logs for storage-provisioner [4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548] ...
I0408 19:34:38.415843 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548"
I0408 19:34:38.478451 800094 logs.go:123] Gathering logs for dmesg ...
I0408 19:34:38.478509 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0408 19:34:38.512484 800094 logs.go:123] Gathering logs for etcd [9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a] ...
I0408 19:34:38.512511 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a"
I0408 19:34:38.602070 800094 logs.go:123] Gathering logs for kube-scheduler [e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0] ...
I0408 19:34:38.602154 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0"
I0408 19:34:38.670831 800094 logs.go:123] Gathering logs for kube-proxy [866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e] ...
I0408 19:34:38.670874 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e"
I0408 19:34:38.751971 800094 logs.go:123] Gathering logs for storage-provisioner [0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3] ...
I0408 19:34:38.751998 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3"
I0408 19:34:38.814079 800094 logs.go:123] Gathering logs for kube-apiserver [301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39] ...
I0408 19:34:38.814112 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39"
I0408 19:34:38.904557 800094 logs.go:123] Gathering logs for etcd [8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca] ...
I0408 19:34:38.904594 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca"
I0408 19:34:38.977383 800094 logs.go:123] Gathering logs for kube-controller-manager [38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6] ...
I0408 19:34:38.977473 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6"
I0408 19:34:39.073729 800094 logs.go:123] Gathering logs for containerd ...
I0408 19:34:39.073823 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0408 19:34:39.145383 800094 logs.go:123] Gathering logs for container status ...
I0408 19:34:39.145453 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0408 19:34:39.229500 800094 logs.go:123] Gathering logs for kube-apiserver [c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b] ...
I0408 19:34:39.229581 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b"
I0408 19:34:39.307911 800094 logs.go:123] Gathering logs for kube-scheduler [1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376] ...
I0408 19:34:39.307970 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376"
I0408 19:34:39.369341 800094 logs.go:123] Gathering logs for kube-proxy [b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2] ...
I0408 19:34:39.369372 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2"
I0408 19:34:39.427556 800094 logs.go:123] Gathering logs for kindnet [f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c] ...
I0408 19:34:39.427582 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c"
I0408 19:34:39.505938 800094 logs.go:123] Gathering logs for kubelet ...
I0408 19:34:39.506015 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0408 19:34:39.571204 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.448599 660 reflector.go:138] object-"kube-system"/"kube-proxy-token-hrsw6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-hrsw6" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.574379 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.454791 660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-62qzt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-62qzt" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.582617 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566273 660 reflector.go:138] object-"default"/"default-token-9t7wk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-9t7wk" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.582916 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566209 660 reflector.go:138] object-"kube-system"/"kindnet-token-f74cf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-f74cf" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.583148 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566367 660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.583386 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566456 660 reflector.go:138] object-"kube-system"/"coredns-token-tjnm4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tjnm4" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.583658 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566571 660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.583910 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566704 660 reflector.go:138] object-"kube-system"/"metrics-server-token-ntl9w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-ntl9w" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:39.593831 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:11 old-k8s-version-789808 kubelet[660]: E0408 19:29:11.608088 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:39.594100 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:12 old-k8s-version-789808 kubelet[660]: E0408 19:29:12.520827 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.597419 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:25 old-k8s-version-789808 kubelet[660]: E0408 19:29:25.314436 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:39.607006 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:36 old-k8s-version-789808 kubelet[660]: E0408 19:29:36.623687 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.607573 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:37 old-k8s-version-789808 kubelet[660]: E0408 19:29:37.628270 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.609574 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:39 old-k8s-version-789808 kubelet[660]: E0408 19:29:39.635668 660 pod_workers.go:191] Error syncing pod 8ea68ede-5c89-4238-b5e4-9811e9a34fc4 ("storage-provisioner_kube-system(8ea68ede-5c89-4238-b5e4-9811e9a34fc4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8ea68ede-5c89-4238-b5e4-9811e9a34fc4)"
W0408 19:34:39.610010 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:39 old-k8s-version-789808 kubelet[660]: E0408 19:29:39.706714 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.610228 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:40 old-k8s-version-789808 kubelet[660]: E0408 19:29:40.304587 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.613405 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:54 old-k8s-version-789808 kubelet[660]: E0408 19:29:54.338620 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:39.614050 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:55 old-k8s-version-789808 kubelet[660]: E0408 19:29:55.704770 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.614413 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:59 old-k8s-version-789808 kubelet[660]: E0408 19:29:59.707332 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.615037 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-789808 kubelet[660]: E0408 19:30:09.304394 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.615459 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:15 old-k8s-version-789808 kubelet[660]: E0408 19:30:15.304089 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.615697 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:20 old-k8s-version-789808 kubelet[660]: E0408 19:30:20.304595 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.616425 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:26 old-k8s-version-789808 kubelet[660]: E0408 19:30:26.799457 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.616854 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:29 old-k8s-version-789808 kubelet[660]: E0408 19:30:29.706613 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.617070 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:31 old-k8s-version-789808 kubelet[660]: E0408 19:30:31.304384 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.620863 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:40 old-k8s-version-789808 kubelet[660]: E0408 19:30:40.304513 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.623388 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:44 old-k8s-version-789808 kubelet[660]: E0408 19:30:44.317089 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:39.623750 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:51 old-k8s-version-789808 kubelet[660]: E0408 19:30:51.304236 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.623962 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:59 old-k8s-version-789808 kubelet[660]: E0408 19:30:59.309103 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.624338 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:03 old-k8s-version-789808 kubelet[660]: E0408 19:31:03.304045 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.624612 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:12 old-k8s-version-789808 kubelet[660]: E0408 19:31:12.304580 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.625257 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:18 old-k8s-version-789808 kubelet[660]: E0408 19:31:18.945002 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.625627 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:19 old-k8s-version-789808 kubelet[660]: E0408 19:31:19.949074 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.625839 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:26 old-k8s-version-789808 kubelet[660]: E0408 19:31:26.304537 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.626202 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:32 old-k8s-version-789808 kubelet[660]: E0408 19:31:32.304493 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.626414 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:37 old-k8s-version-789808 kubelet[660]: E0408 19:31:37.304399 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.626784 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:43 old-k8s-version-789808 kubelet[660]: E0408 19:31:43.304105 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.627000 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:49 old-k8s-version-789808 kubelet[660]: E0408 19:31:49.304415 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.627354 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:58 old-k8s-version-789808 kubelet[660]: E0408 19:31:58.313363 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.627563 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:00 old-k8s-version-789808 kubelet[660]: E0408 19:32:00.315122 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.627917 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:10 old-k8s-version-789808 kubelet[660]: E0408 19:32:10.304077 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.630429 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:13 old-k8s-version-789808 kubelet[660]: E0408 19:32:13.313444 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:39.630798 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:25 old-k8s-version-789808 kubelet[660]: E0408 19:32:25.304097 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.631011 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:28 old-k8s-version-789808 kubelet[660]: E0408 19:32:28.305180 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.631369 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:37 old-k8s-version-789808 kubelet[660]: E0408 19:32:37.304121 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.631578 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:39 old-k8s-version-789808 kubelet[660]: E0408 19:32:39.304389 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.631921 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:51 old-k8s-version-789808 kubelet[660]: E0408 19:32:51.305084 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.632405 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:52 old-k8s-version-789808 kubelet[660]: E0408 19:32:52.228011 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.632757 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:59 old-k8s-version-789808 kubelet[660]: E0408 19:32:59.707003 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.632967 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:03 old-k8s-version-789808 kubelet[660]: E0408 19:33:03.304676 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.633319 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:12 old-k8s-version-789808 kubelet[660]: E0408 19:33:12.304502 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.633530 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:17 old-k8s-version-789808 kubelet[660]: E0408 19:33:17.304317 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.633890 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:23 old-k8s-version-789808 kubelet[660]: E0408 19:33:23.304147 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.634100 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:28 old-k8s-version-789808 kubelet[660]: E0408 19:33:28.306566 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.634458 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:36 old-k8s-version-789808 kubelet[660]: E0408 19:33:36.304754 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.634749 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:42 old-k8s-version-789808 kubelet[660]: E0408 19:33:42.305165 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.635191 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:51 old-k8s-version-789808 kubelet[660]: E0408 19:33:51.304494 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.635410 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:55 old-k8s-version-789808 kubelet[660]: E0408 19:33:55.304450 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.635766 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:05 old-k8s-version-789808 kubelet[660]: E0408 19:34:05.304139 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.635983 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:10 old-k8s-version-789808 kubelet[660]: E0408 19:34:10.304418 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.636336 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:18 old-k8s-version-789808 kubelet[660]: E0408 19:34:18.305055 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.636548 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:25 old-k8s-version-789808 kubelet[660]: E0408 19:34:25.304373 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.636902 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:30 old-k8s-version-789808 kubelet[660]: E0408 19:34:30.304913 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.637112 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:37 old-k8s-version-789808 kubelet[660]: E0408 19:34:37.304870 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0408 19:34:39.637137 800094 logs.go:123] Gathering logs for describe nodes ...
I0408 19:34:39.637164 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0408 19:34:39.853721 800094 out.go:358] Setting ErrFile to fd 2...
I0408 19:34:39.853918 800094 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0408 19:34:39.854005 800094 out.go:270] X Problems detected in kubelet:
W0408 19:34:39.854170 800094 out.go:270] Apr 08 19:34:10 old-k8s-version-789808 kubelet[660]: E0408 19:34:10.304418 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.854218 800094 out.go:270] Apr 08 19:34:18 old-k8s-version-789808 kubelet[660]: E0408 19:34:18.305055 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.854275 800094 out.go:270] Apr 08 19:34:25 old-k8s-version-789808 kubelet[660]: E0408 19:34:25.304373 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:39.854315 800094 out.go:270] Apr 08 19:34:30 old-k8s-version-789808 kubelet[660]: E0408 19:34:30.304913 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:39.854348 800094 out.go:270] Apr 08 19:34:37 old-k8s-version-789808 kubelet[660]: E0408 19:34:37.304870 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0408 19:34:39.854395 800094 out.go:358] Setting ErrFile to fd 2...
I0408 19:34:39.854416 800094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 19:34:38.618112 810787 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0408 19:34:38.618359 810787 start.go:159] libmachine.API.Create for "embed-certs-504925" (driver="docker")
I0408 19:34:38.618393 810787 client.go:168] LocalClient.Create starting
I0408 19:34:38.618473 810787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca.pem
I0408 19:34:38.618617 810787 main.go:141] libmachine: Decoding PEM data...
I0408 19:34:38.618654 810787 main.go:141] libmachine: Parsing certificate...
I0408 19:34:38.618714 810787 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20604-581234/.minikube/certs/cert.pem
I0408 19:34:38.618734 810787 main.go:141] libmachine: Decoding PEM data...
I0408 19:34:38.618749 810787 main.go:141] libmachine: Parsing certificate...
I0408 19:34:38.619121 810787 cli_runner.go:164] Run: docker network inspect embed-certs-504925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0408 19:34:38.640982 810787 cli_runner.go:211] docker network inspect embed-certs-504925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0408 19:34:38.641058 810787 network_create.go:284] running [docker network inspect embed-certs-504925] to gather additional debugging logs...
I0408 19:34:38.641073 810787 cli_runner.go:164] Run: docker network inspect embed-certs-504925
W0408 19:34:38.658358 810787 cli_runner.go:211] docker network inspect embed-certs-504925 returned with exit code 1
I0408 19:34:38.658385 810787 network_create.go:287] error running [docker network inspect embed-certs-504925]: docker network inspect embed-certs-504925: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-504925 not found
I0408 19:34:38.658398 810787 network_create.go:289] output of [docker network inspect embed-certs-504925]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-504925 not found
** /stderr **
I0408 19:34:38.658531 810787 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0408 19:34:38.685097 810787 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3180f6af3059 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:63:fa:08:58:ad} reservation:<nil>}
I0408 19:34:38.685523 810787 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5fe5baec2959 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:c0:a5:ba:d0:fe} reservation:<nil>}
I0408 19:34:38.685850 810787 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-82662ae0d07a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:66:6c:7a:7a:12:f4} reservation:<nil>}
I0408 19:34:38.686121 810787 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-8b0e658ec51b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e6:9b:e5:77:21:19} reservation:<nil>}
I0408 19:34:38.686581 810787 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a28940}
I0408 19:34:38.686609 810787 network_create.go:124] attempt to create docker network embed-certs-504925 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0408 19:34:38.686678 810787 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-504925 embed-certs-504925
I0408 19:34:38.769874 810787 network_create.go:108] docker network embed-certs-504925 192.168.85.0/24 created
I0408 19:34:38.769905 810787 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-504925" container
I0408 19:34:38.770001 810787 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0408 19:34:38.795426 810787 cli_runner.go:164] Run: docker volume create embed-certs-504925 --label name.minikube.sigs.k8s.io=embed-certs-504925 --label created_by.minikube.sigs.k8s.io=true
I0408 19:34:38.835076 810787 oci.go:103] Successfully created a docker volume embed-certs-504925
I0408 19:34:38.835168 810787 cli_runner.go:164] Run: docker run --rm --name embed-certs-504925-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-504925 --entrypoint /usr/bin/test -v embed-certs-504925:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -d /var/lib
I0408 19:34:39.517389 810787 oci.go:107] Successfully prepared a docker volume embed-certs-504925
I0408 19:34:39.517444 810787 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0408 19:34:39.517477 810787 kic.go:194] Starting extracting preloaded images to volume ...
I0408 19:34:39.517548 810787 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20604-581234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-504925:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -I lz4 -xf /preloaded.tar -C /extractDir
I0408 19:34:44.222079 810787 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20604-581234/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-504925:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a -I lz4 -xf /preloaded.tar -C /extractDir: (4.704494662s)
I0408 19:34:44.222113 810787 kic.go:203] duration metric: took 4.704643954s to extract preloaded images to volume ...
W0408 19:34:44.222251 810787 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0408 19:34:44.222368 810787 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0408 19:34:44.283045 810787 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-504925 --name embed-certs-504925 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-504925 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-504925 --network embed-certs-504925 --ip 192.168.85.2 --volume embed-certs-504925:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a
I0408 19:34:44.605785 810787 cli_runner.go:164] Run: docker container inspect embed-certs-504925 --format={{.State.Running}}
I0408 19:34:44.630584 810787 cli_runner.go:164] Run: docker container inspect embed-certs-504925 --format={{.State.Status}}
I0408 19:34:44.655316 810787 cli_runner.go:164] Run: docker exec embed-certs-504925 stat /var/lib/dpkg/alternatives/iptables
I0408 19:34:44.730160 810787 oci.go:144] the created container "embed-certs-504925" has a running status.
I0408 19:34:44.730191 810787 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20604-581234/.minikube/machines/embed-certs-504925/id_rsa...
I0408 19:34:45.556216 810787 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20604-581234/.minikube/machines/embed-certs-504925/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0408 19:34:45.586144 810787 cli_runner.go:164] Run: docker container inspect embed-certs-504925 --format={{.State.Status}}
I0408 19:34:45.608380 810787 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0408 19:34:45.608398 810787 kic_runner.go:114] Args: [docker exec --privileged embed-certs-504925 chown docker:docker /home/docker/.ssh/authorized_keys]
I0408 19:34:45.664044 810787 cli_runner.go:164] Run: docker container inspect embed-certs-504925 --format={{.State.Status}}
I0408 19:34:45.688430 810787 machine.go:93] provisionDockerMachine start ...
I0408 19:34:45.688521 810787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-504925
I0408 19:34:45.720016 810787 main.go:141] libmachine: Using SSH client type: native
I0408 19:34:45.720360 810787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33809 <nil> <nil>}
I0408 19:34:45.720375 810787 main.go:141] libmachine: About to run SSH command:
hostname
I0408 19:34:45.862623 810787 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-504925
I0408 19:34:45.862725 810787 ubuntu.go:169] provisioning hostname "embed-certs-504925"
I0408 19:34:45.862815 810787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-504925
I0408 19:34:45.885297 810787 main.go:141] libmachine: Using SSH client type: native
I0408 19:34:45.885950 810787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33809 <nil> <nil>}
I0408 19:34:45.886018 810787 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-504925 && echo "embed-certs-504925" | sudo tee /etc/hostname
I0408 19:34:46.044553 810787 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-504925
I0408 19:34:46.044629 810787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-504925
I0408 19:34:46.063996 810787 main.go:141] libmachine: Using SSH client type: native
I0408 19:34:46.064313 810787 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 33809 <nil> <nil>}
I0408 19:34:46.064331 810787 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-504925' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-504925/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-504925' | sudo tee -a /etc/hosts;
fi
fi
I0408 19:34:46.198703 810787 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0408 19:34:46.198734 810787 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20604-581234/.minikube CaCertPath:/home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20604-581234/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20604-581234/.minikube}
I0408 19:34:46.198765 810787 ubuntu.go:177] setting up certificates
I0408 19:34:46.198776 810787 provision.go:84] configureAuth start
I0408 19:34:46.198851 810787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-504925
I0408 19:34:46.216692 810787 provision.go:143] copyHostCerts
I0408 19:34:46.216759 810787 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-581234/.minikube/ca.pem, removing ...
I0408 19:34:46.216815 810787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-581234/.minikube/ca.pem
I0408 19:34:46.216954 810787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20604-581234/.minikube/ca.pem (1078 bytes)
I0408 19:34:46.217143 810787 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-581234/.minikube/cert.pem, removing ...
I0408 19:34:46.217157 810787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-581234/.minikube/cert.pem
I0408 19:34:46.217198 810787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20604-581234/.minikube/cert.pem (1123 bytes)
I0408 19:34:46.217274 810787 exec_runner.go:144] found /home/jenkins/minikube-integration/20604-581234/.minikube/key.pem, removing ...
I0408 19:34:46.217290 810787 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20604-581234/.minikube/key.pem
I0408 19:34:46.217319 810787 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20604-581234/.minikube/key.pem (1679 bytes)
I0408 19:34:46.217378 810787 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20604-581234/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca-key.pem org=jenkins.embed-certs-504925 san=[127.0.0.1 192.168.85.2 embed-certs-504925 localhost minikube]
I0408 19:34:46.329148 810787 provision.go:177] copyRemoteCerts
I0408 19:34:46.329217 810787 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0408 19:34:46.329270 810787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-504925
I0408 19:34:46.348458 810787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/embed-certs-504925/id_rsa Username:docker}
I0408 19:34:46.440706 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0408 19:34:46.467301 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0408 19:34:46.492899 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0408 19:34:46.518414 810787 provision.go:87] duration metric: took 319.624026ms to configureAuth
I0408 19:34:46.518525 810787 ubuntu.go:193] setting minikube options for container-runtime
I0408 19:34:46.518791 810787 config.go:182] Loaded profile config "embed-certs-504925": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.2
I0408 19:34:46.518808 810787 machine.go:96] duration metric: took 830.359953ms to provisionDockerMachine
I0408 19:34:46.518817 810787 client.go:171] duration metric: took 7.900412107s to LocalClient.Create
I0408 19:34:46.518858 810787 start.go:167] duration metric: took 7.900498851s to libmachine.API.Create "embed-certs-504925"
I0408 19:34:46.518870 810787 start.go:293] postStartSetup for "embed-certs-504925" (driver="docker")
I0408 19:34:46.518880 810787 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0408 19:34:46.518945 810787 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0408 19:34:46.519001 810787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-504925
I0408 19:34:46.536300 810787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/embed-certs-504925/id_rsa Username:docker}
I0408 19:34:46.628199 810787 ssh_runner.go:195] Run: cat /etc/os-release
I0408 19:34:46.631667 810787 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0408 19:34:46.631709 810787 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0408 19:34:46.631721 810787 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0408 19:34:46.631728 810787 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0408 19:34:46.631738 810787 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-581234/.minikube/addons for local assets ...
I0408 19:34:46.631798 810787 filesync.go:126] Scanning /home/jenkins/minikube-integration/20604-581234/.minikube/files for local assets ...
I0408 19:34:46.631887 810787 filesync.go:149] local asset: /home/jenkins/minikube-integration/20604-581234/.minikube/files/etc/ssl/certs/5866092.pem -> 5866092.pem in /etc/ssl/certs
I0408 19:34:46.631992 810787 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0408 19:34:46.640746 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/files/etc/ssl/certs/5866092.pem --> /etc/ssl/certs/5866092.pem (1708 bytes)
I0408 19:34:46.666320 810787 start.go:296] duration metric: took 147.4352ms for postStartSetup
I0408 19:34:46.666794 810787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-504925
I0408 19:34:46.686623 810787 profile.go:143] Saving config to /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/config.json ...
I0408 19:34:46.686919 810787 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0408 19:34:46.686980 810787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-504925
I0408 19:34:46.707942 810787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/embed-certs-504925/id_rsa Username:docker}
I0408 19:34:46.799685 810787 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0408 19:34:46.804644 810787 start.go:128] duration metric: took 8.189886438s to createHost
I0408 19:34:46.804670 810787 start.go:83] releasing machines lock for "embed-certs-504925", held for 8.190032284s
I0408 19:34:46.804750 810787 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-504925
I0408 19:34:46.822169 810787 ssh_runner.go:195] Run: cat /version.json
I0408 19:34:46.822180 810787 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0408 19:34:46.822248 810787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-504925
I0408 19:34:46.822292 810787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-504925
I0408 19:34:46.851601 810787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/embed-certs-504925/id_rsa Username:docker}
I0408 19:34:46.852442 810787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33809 SSHKeyPath:/home/jenkins/minikube-integration/20604-581234/.minikube/machines/embed-certs-504925/id_rsa Username:docker}
I0408 19:34:46.938166 810787 ssh_runner.go:195] Run: systemctl --version
I0408 19:34:47.081967 810787 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0408 19:34:47.086850 810787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0408 19:34:47.115294 810787 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0408 19:34:47.115386 810787 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0408 19:34:47.148550 810787 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0408 19:34:47.148573 810787 start.go:495] detecting cgroup driver to use...
I0408 19:34:47.148607 810787 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0408 19:34:47.148676 810787 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0408 19:34:47.163334 810787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0408 19:34:47.175568 810787 docker.go:217] disabling cri-docker service (if available) ...
I0408 19:34:47.175691 810787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0408 19:34:47.192120 810787 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0408 19:34:47.208032 810787 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0408 19:34:47.307086 810787 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0408 19:34:47.414814 810787 docker.go:233] disabling docker service ...
I0408 19:34:47.414891 810787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0408 19:34:47.439573 810787 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0408 19:34:47.451779 810787 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0408 19:34:47.553395 810787 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0408 19:34:47.659751 810787 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0408 19:34:47.673502 810787 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0408 19:34:47.692977 810787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0408 19:34:47.703701 810787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0408 19:34:47.714208 810787 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0408 19:34:47.714319 810787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0408 19:34:47.725212 810787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0408 19:34:47.736385 810787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0408 19:34:47.746558 810787 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0408 19:34:47.756948 810787 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0408 19:34:47.767140 810787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0408 19:34:47.777125 810787 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0408 19:34:47.789835 810787 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0408 19:34:47.800826 810787 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0408 19:34:47.810010 810787 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0408 19:34:47.818753 810787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0408 19:34:47.910164 810787 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0408 19:34:48.041796 810787 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0408 19:34:48.041867 810787 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0408 19:34:48.045696 810787 start.go:563] Will wait 60s for crictl version
I0408 19:34:48.045820 810787 ssh_runner.go:195] Run: which crictl
I0408 19:34:48.049446 810787 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0408 19:34:48.087486 810787 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.27
RuntimeApiVersion: v1
I0408 19:34:48.087560 810787 ssh_runner.go:195] Run: containerd --version
I0408 19:34:48.110289 810787 ssh_runner.go:195] Run: containerd --version
I0408 19:34:48.140869 810787 out.go:177] * Preparing Kubernetes v1.32.2 on containerd 1.7.27 ...
I0408 19:34:48.143883 810787 cli_runner.go:164] Run: docker network inspect embed-certs-504925 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0408 19:34:48.161970 810787 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0408 19:34:48.165991 810787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0408 19:34:48.177248 810787 kubeadm.go:883] updating cluster {Name:embed-certs-504925 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-504925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0408 19:34:48.177361 810787 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime containerd
I0408 19:34:48.177419 810787 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 19:34:48.213538 810787 containerd.go:627] all images are preloaded for containerd runtime.
I0408 19:34:48.213563 810787 containerd.go:534] Images already preloaded, skipping extraction
I0408 19:34:48.213620 810787 ssh_runner.go:195] Run: sudo crictl images --output json
I0408 19:34:48.249977 810787 containerd.go:627] all images are preloaded for containerd runtime.
I0408 19:34:48.250001 810787 cache_images.go:84] Images are preloaded, skipping loading
I0408 19:34:48.250009 810787 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 containerd true true} ...
I0408 19:34:48.250136 810787 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-504925 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:embed-certs-504925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0408 19:34:48.250227 810787 ssh_runner.go:195] Run: sudo crictl info
I0408 19:34:48.285964 810787 cni.go:84] Creating CNI manager for ""
I0408 19:34:48.285989 810787 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0408 19:34:48.285999 810787 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0408 19:34:48.286020 810787 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-504925 NodeName:embed-certs-504925 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0408 19:34:48.286133 810787 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-504925"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0408 19:34:48.286205 810787 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0408 19:34:48.295117 810787 binaries.go:44] Found k8s binaries, skipping transfer
I0408 19:34:48.295207 810787 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0408 19:34:48.304037 810787 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0408 19:34:48.329852 810787 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0408 19:34:48.350161 810787 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0408 19:34:48.370777 810787 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0408 19:34:48.374792 810787 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0408 19:34:48.387703 810787 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0408 19:34:48.479826 810787 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0408 19:34:48.495076 810787 certs.go:68] Setting up /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925 for IP: 192.168.85.2
I0408 19:34:48.495103 810787 certs.go:194] generating shared ca certs ...
I0408 19:34:48.495145 810787 certs.go:226] acquiring lock for ca certs: {Name:mkbcf8d523d57729eb1fc091129687c3aa71d028 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:34:48.495333 810787 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20604-581234/.minikube/ca.key
I0408 19:34:48.495412 810787 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20604-581234/.minikube/proxy-client-ca.key
I0408 19:34:48.495427 810787 certs.go:256] generating profile certs ...
I0408 19:34:48.495505 810787 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/client.key
I0408 19:34:48.495532 810787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/client.crt with IP's: []
I0408 19:34:49.425630 810787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/client.crt ...
I0408 19:34:49.425708 810787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/client.crt: {Name:mk2fbcb68fb3c589a397607ca450a0e7740c4cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:34:49.426600 810787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/client.key ...
I0408 19:34:49.426666 810787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/client.key: {Name:mk9d7f7c38fb0ee6c212d5ff263a5fa6f3ba04fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:34:49.426817 810787 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.key.2dbdc295
I0408 19:34:49.426870 810787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.crt.2dbdc295 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0408 19:34:49.885678 810787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.crt.2dbdc295 ...
I0408 19:34:49.885746 810787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.crt.2dbdc295: {Name:mkb1bdcdd301eeec0b5ed77476cb8f6df42073cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:34:49.886592 810787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.key.2dbdc295 ...
I0408 19:34:49.886618 810787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.key.2dbdc295: {Name:mk1a9cd443ecbe01c9fbf50d292492132cf84aed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:34:49.887321 810787 certs.go:381] copying /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.crt.2dbdc295 -> /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.crt
I0408 19:34:49.887418 810787 certs.go:385] copying /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.key.2dbdc295 -> /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.key
I0408 19:34:49.887473 810787 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/proxy-client.key
I0408 19:34:49.887488 810787 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/proxy-client.crt with IP's: []
I0408 19:34:50.320809 810787 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/proxy-client.crt ...
I0408 19:34:50.320876 810787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/proxy-client.crt: {Name:mk99c5ee8a1bcc697427acb451fa5f6dbbcc8bf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:34:50.321071 810787 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/proxy-client.key ...
I0408 19:34:50.321113 810787 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/proxy-client.key: {Name:mk518854035d7f13079d3b5a648ca18c5f0329fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0408 19:34:50.321362 810787 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/586609.pem (1338 bytes)
W0408 19:34:50.321435 810787 certs.go:480] ignoring /home/jenkins/minikube-integration/20604-581234/.minikube/certs/586609_empty.pem, impossibly tiny 0 bytes
I0408 19:34:50.321461 810787 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca-key.pem (1675 bytes)
I0408 19:34:50.321510 810787 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/ca.pem (1078 bytes)
I0408 19:34:50.321564 810787 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/cert.pem (1123 bytes)
I0408 19:34:50.321612 810787 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/certs/key.pem (1679 bytes)
I0408 19:34:50.321692 810787 certs.go:484] found cert: /home/jenkins/minikube-integration/20604-581234/.minikube/files/etc/ssl/certs/5866092.pem (1708 bytes)
I0408 19:34:50.322282 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0408 19:34:50.355266 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0408 19:34:50.392718 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0408 19:34:50.439812 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0408 19:34:50.465063 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0408 19:34:50.506169 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0408 19:34:50.535944 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0408 19:34:50.582389 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/profiles/embed-certs-504925/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0408 19:34:50.611513 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/files/etc/ssl/certs/5866092.pem --> /usr/share/ca-certificates/5866092.pem (1708 bytes)
I0408 19:34:50.640827 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0408 19:34:50.670861 810787 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20604-581234/.minikube/certs/586609.pem --> /usr/share/ca-certificates/586609.pem (1338 bytes)
I0408 19:34:50.712521 810787 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0408 19:34:50.733215 810787 ssh_runner.go:195] Run: openssl version
I0408 19:34:50.739446 810787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586609.pem && ln -fs /usr/share/ca-certificates/586609.pem /etc/ssl/certs/586609.pem"
I0408 19:34:50.750575 810787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586609.pem
I0408 19:34:50.765008 810787 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 8 18:49 /usr/share/ca-certificates/586609.pem
I0408 19:34:50.765074 810787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586609.pem
I0408 19:34:50.773458 810787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586609.pem /etc/ssl/certs/51391683.0"
I0408 19:34:50.783936 810787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5866092.pem && ln -fs /usr/share/ca-certificates/5866092.pem /etc/ssl/certs/5866092.pem"
I0408 19:34:50.795685 810787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5866092.pem
I0408 19:34:50.799827 810787 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 8 18:49 /usr/share/ca-certificates/5866092.pem
I0408 19:34:50.799899 810787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5866092.pem
I0408 19:34:50.807556 810787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5866092.pem /etc/ssl/certs/3ec20f2e.0"
I0408 19:34:50.817009 810787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0408 19:34:50.826164 810787 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0408 19:34:50.830087 810787 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 8 18:41 /usr/share/ca-certificates/minikubeCA.pem
I0408 19:34:50.830156 810787 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0408 19:34:50.837770 810787 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0408 19:34:50.848908 810787 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0408 19:34:50.852877 810787 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0408 19:34:50.852931 810787 kubeadm.go:392] StartCluster: {Name:embed-certs-504925 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1744107393-20604@sha256:2430533582a8c08f907b2d5976c79bd2e672b4f3d4484088c99b839f3175ed6a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-504925 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0408 19:34:50.853005 810787 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0408 19:34:50.853072 810787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0408 19:34:50.926637 810787 cri.go:89] found id: ""
I0408 19:34:50.926721 810787 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0408 19:34:50.949709 810787 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0408 19:34:50.969730 810787 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0408 19:34:50.969797 810787 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0408 19:34:50.991593 810787 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0408 19:34:50.991612 810787 kubeadm.go:157] found existing configuration files:
I0408 19:34:50.991667 810787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0408 19:34:51.015454 810787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0408 19:34:51.015535 810787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0408 19:34:51.033368 810787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0408 19:34:51.045222 810787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0408 19:34:51.045292 810787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0408 19:34:51.057793 810787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0408 19:34:51.072519 810787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0408 19:34:51.072587 810787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0408 19:34:51.085663 810787 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0408 19:34:51.099292 810787 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0408 19:34:51.099359 810787 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0408 19:34:51.109541 810787 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0408 19:34:51.160619 810787 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
I0408 19:34:51.160792 810787 kubeadm.go:310] [preflight] Running pre-flight checks
I0408 19:34:51.197778 810787 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0408 19:34:51.197852 810787 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1081-aws
I0408 19:34:51.197892 810787 kubeadm.go:310] OS: Linux
I0408 19:34:51.197942 810787 kubeadm.go:310] CGROUPS_CPU: enabled
I0408 19:34:51.197994 810787 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0408 19:34:51.198046 810787 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0408 19:34:51.198098 810787 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0408 19:34:51.198149 810787 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0408 19:34:51.198210 810787 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0408 19:34:51.198260 810787 kubeadm.go:310] CGROUPS_PIDS: enabled
I0408 19:34:51.198313 810787 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0408 19:34:51.198363 810787 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0408 19:34:51.294918 810787 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0408 19:34:51.295032 810787 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0408 19:34:51.295130 810787 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0408 19:34:51.306882 810787 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0408 19:34:49.859697 800094 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0408 19:34:49.876565 800094 api_server.go:72] duration metric: took 5m59.255265736s to wait for apiserver process to appear ...
I0408 19:34:49.876598 800094 api_server.go:88] waiting for apiserver healthz status ...
I0408 19:34:49.876632 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0408 19:34:49.876718 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0408 19:34:49.939820 800094 cri.go:89] found id: "301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39"
I0408 19:34:49.939851 800094 cri.go:89] found id: "c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b"
I0408 19:34:49.939856 800094 cri.go:89] found id: ""
I0408 19:34:49.939864 800094 logs.go:282] 2 containers: [301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39 c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b]
I0408 19:34:49.939947 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:49.947167 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:49.954567 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0408 19:34:49.954661 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0408 19:34:50.025717 800094 cri.go:89] found id: "9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a"
I0408 19:34:50.025743 800094 cri.go:89] found id: "8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca"
I0408 19:34:50.025749 800094 cri.go:89] found id: ""
I0408 19:34:50.025756 800094 logs.go:282] 2 containers: [9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a 8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca]
I0408 19:34:50.025822 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.035367 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.041970 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0408 19:34:50.042047 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0408 19:34:50.122321 800094 cri.go:89] found id: "c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439"
I0408 19:34:50.122342 800094 cri.go:89] found id: "a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d"
I0408 19:34:50.122347 800094 cri.go:89] found id: ""
I0408 19:34:50.122354 800094 logs.go:282] 2 containers: [c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439 a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d]
I0408 19:34:50.122413 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.126523 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.141640 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0408 19:34:50.141714 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0408 19:34:50.247771 800094 cri.go:89] found id: "1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376"
I0408 19:34:50.247790 800094 cri.go:89] found id: "e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0"
I0408 19:34:50.247795 800094 cri.go:89] found id: ""
I0408 19:34:50.247803 800094 logs.go:282] 2 containers: [1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376 e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0]
I0408 19:34:50.247859 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.251752 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.255498 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0408 19:34:50.255559 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0408 19:34:50.302738 800094 cri.go:89] found id: "b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2"
I0408 19:34:50.302758 800094 cri.go:89] found id: "866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e"
I0408 19:34:50.302763 800094 cri.go:89] found id: ""
I0408 19:34:50.302770 800094 logs.go:282] 2 containers: [b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2 866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e]
I0408 19:34:50.302827 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.313030 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.316748 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0408 19:34:50.316818 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0408 19:34:50.378626 800094 cri.go:89] found id: "a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7"
I0408 19:34:50.378657 800094 cri.go:89] found id: "38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6"
I0408 19:34:50.378663 800094 cri.go:89] found id: ""
I0408 19:34:50.378670 800094 logs.go:282] 2 containers: [a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7 38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6]
I0408 19:34:50.378728 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.399304 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.403639 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0408 19:34:50.403711 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0408 19:34:50.488910 800094 cri.go:89] found id: "f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98"
I0408 19:34:50.488929 800094 cri.go:89] found id: "f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c"
I0408 19:34:50.488933 800094 cri.go:89] found id: ""
I0408 19:34:50.488940 800094 logs.go:282] 2 containers: [f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98 f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c]
I0408 19:34:50.489012 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.497668 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.510988 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0408 19:34:50.511062 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0408 19:34:50.572505 800094 cri.go:89] found id: "ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc"
I0408 19:34:50.572525 800094 cri.go:89] found id: ""
I0408 19:34:50.572533 800094 logs.go:282] 1 containers: [ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc]
I0408 19:34:50.572589 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.576551 800094 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0408 19:34:50.576622 800094 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0408 19:34:50.632631 800094 cri.go:89] found id: "4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548"
I0408 19:34:50.632649 800094 cri.go:89] found id: "0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3"
I0408 19:34:50.632654 800094 cri.go:89] found id: ""
I0408 19:34:50.632662 800094 logs.go:282] 2 containers: [4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548 0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3]
I0408 19:34:50.632716 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.636840 800094 ssh_runner.go:195] Run: which crictl
I0408 19:34:50.644220 800094 logs.go:123] Gathering logs for kubelet ...
I0408 19:34:50.644252 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0408 19:34:50.705436 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.448599 660 reflector.go:138] object-"kube-system"/"kube-proxy-token-hrsw6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-hrsw6" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.705683 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.454791 660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-62qzt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-62qzt" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.709331 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566273 660 reflector.go:138] object-"default"/"default-token-9t7wk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-9t7wk" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.709547 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566209 660 reflector.go:138] object-"kube-system"/"kindnet-token-f74cf": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-f74cf" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.709751 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566367 660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.709962 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566456 660 reflector.go:138] object-"kube-system"/"coredns-token-tjnm4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tjnm4" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.710163 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566571 660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.710609 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:07 old-k8s-version-789808 kubelet[660]: E0408 19:29:07.566704 660 reflector.go:138] object-"kube-system"/"metrics-server-token-ntl9w": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-ntl9w" is forbidden: User "system:node:old-k8s-version-789808" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-789808' and this object
W0408 19:34:50.720297 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:11 old-k8s-version-789808 kubelet[660]: E0408 19:29:11.608088 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:50.720564 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:12 old-k8s-version-789808 kubelet[660]: E0408 19:29:12.520827 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.724426 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:25 old-k8s-version-789808 kubelet[660]: E0408 19:29:25.314436 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:50.726461 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:36 old-k8s-version-789808 kubelet[660]: E0408 19:29:36.623687 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.727070 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:37 old-k8s-version-789808 kubelet[660]: E0408 19:29:37.628270 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.728686 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:39 old-k8s-version-789808 kubelet[660]: E0408 19:29:39.635668 660 pod_workers.go:191] Error syncing pod 8ea68ede-5c89-4238-b5e4-9811e9a34fc4 ("storage-provisioner_kube-system(8ea68ede-5c89-4238-b5e4-9811e9a34fc4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(8ea68ede-5c89-4238-b5e4-9811e9a34fc4)"
W0408 19:34:50.729084 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:39 old-k8s-version-789808 kubelet[660]: E0408 19:29:39.706714 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.729299 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:40 old-k8s-version-789808 kubelet[660]: E0408 19:29:40.304587 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.733634 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:54 old-k8s-version-789808 kubelet[660]: E0408 19:29:54.338620 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:50.734496 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:55 old-k8s-version-789808 kubelet[660]: E0408 19:29:55.704770 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.734883 800094 logs.go:138] Found kubelet problem: Apr 08 19:29:59 old-k8s-version-789808 kubelet[660]: E0408 19:29:59.707332 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.735097 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:09 old-k8s-version-789808 kubelet[660]: E0408 19:30:09.304394 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.735453 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:15 old-k8s-version-789808 kubelet[660]: E0408 19:30:15.304089 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.735665 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:20 old-k8s-version-789808 kubelet[660]: E0408 19:30:20.304595 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.736288 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:26 old-k8s-version-789808 kubelet[660]: E0408 19:30:26.799457 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.736651 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:29 old-k8s-version-789808 kubelet[660]: E0408 19:30:29.706613 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.736867 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:31 old-k8s-version-789808 kubelet[660]: E0408 19:30:31.304384 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.737287 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:40 old-k8s-version-789808 kubelet[660]: E0408 19:30:40.304513 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.741676 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:44 old-k8s-version-789808 kubelet[660]: E0408 19:30:44.317089 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:50.742068 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:51 old-k8s-version-789808 kubelet[660]: E0408 19:30:51.304236 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.742291 800094 logs.go:138] Found kubelet problem: Apr 08 19:30:59 old-k8s-version-789808 kubelet[660]: E0408 19:30:59.309103 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.742714 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:03 old-k8s-version-789808 kubelet[660]: E0408 19:31:03.304045 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.742928 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:12 old-k8s-version-789808 kubelet[660]: E0408 19:31:12.304580 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.743542 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:18 old-k8s-version-789808 kubelet[660]: E0408 19:31:18.945002 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.743897 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:19 old-k8s-version-789808 kubelet[660]: E0408 19:31:19.949074 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.744114 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:26 old-k8s-version-789808 kubelet[660]: E0408 19:31:26.304537 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.744466 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:32 old-k8s-version-789808 kubelet[660]: E0408 19:31:32.304493 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.744676 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:37 old-k8s-version-789808 kubelet[660]: E0408 19:31:37.304399 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.745031 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:43 old-k8s-version-789808 kubelet[660]: E0408 19:31:43.304105 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.745239 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:49 old-k8s-version-789808 kubelet[660]: E0408 19:31:49.304415 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.745591 800094 logs.go:138] Found kubelet problem: Apr 08 19:31:58 old-k8s-version-789808 kubelet[660]: E0408 19:31:58.313363 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.745800 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:00 old-k8s-version-789808 kubelet[660]: E0408 19:32:00.315122 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.746155 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:10 old-k8s-version-789808 kubelet[660]: E0408 19:32:10.304077 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.748636 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:13 old-k8s-version-789808 kubelet[660]: E0408 19:32:13.313444 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0408 19:34:50.748991 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:25 old-k8s-version-789808 kubelet[660]: E0408 19:32:25.304097 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.749202 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:28 old-k8s-version-789808 kubelet[660]: E0408 19:32:28.305180 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.749569 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:37 old-k8s-version-789808 kubelet[660]: E0408 19:32:37.304121 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.749793 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:39 old-k8s-version-789808 kubelet[660]: E0408 19:32:39.304389 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.750864 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:51 old-k8s-version-789808 kubelet[660]: E0408 19:32:51.305084 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.751336 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:52 old-k8s-version-789808 kubelet[660]: E0408 19:32:52.228011 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.751665 800094 logs.go:138] Found kubelet problem: Apr 08 19:32:59 old-k8s-version-789808 kubelet[660]: E0408 19:32:59.707003 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.751853 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:03 old-k8s-version-789808 kubelet[660]: E0408 19:33:03.304676 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.752181 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:12 old-k8s-version-789808 kubelet[660]: E0408 19:33:12.304502 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.752367 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:17 old-k8s-version-789808 kubelet[660]: E0408 19:33:17.304317 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.752693 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:23 old-k8s-version-789808 kubelet[660]: E0408 19:33:23.304147 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.752877 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:28 old-k8s-version-789808 kubelet[660]: E0408 19:33:28.306566 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.753204 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:36 old-k8s-version-789808 kubelet[660]: E0408 19:33:36.304754 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.753387 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:42 old-k8s-version-789808 kubelet[660]: E0408 19:33:42.305165 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.753714 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:51 old-k8s-version-789808 kubelet[660]: E0408 19:33:51.304494 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.753899 800094 logs.go:138] Found kubelet problem: Apr 08 19:33:55 old-k8s-version-789808 kubelet[660]: E0408 19:33:55.304450 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.754225 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:05 old-k8s-version-789808 kubelet[660]: E0408 19:34:05.304139 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.754410 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:10 old-k8s-version-789808 kubelet[660]: E0408 19:34:10.304418 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.754760 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:18 old-k8s-version-789808 kubelet[660]: E0408 19:34:18.305055 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.754946 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:25 old-k8s-version-789808 kubelet[660]: E0408 19:34:25.304373 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.755274 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:30 old-k8s-version-789808 kubelet[660]: E0408 19:34:30.304913 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:50.755458 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:37 old-k8s-version-789808 kubelet[660]: E0408 19:34:37.304870 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:50.755783 800094 logs.go:138] Found kubelet problem: Apr 08 19:34:45 old-k8s-version-789808 kubelet[660]: E0408 19:34:45.305341 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
I0408 19:34:50.755794 800094 logs.go:123] Gathering logs for describe nodes ...
I0408 19:34:50.755808 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0408 19:34:51.020353 800094 logs.go:123] Gathering logs for kube-scheduler [1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376] ...
I0408 19:34:51.020407 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376"
I0408 19:34:51.106112 800094 logs.go:123] Gathering logs for kube-scheduler [e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0] ...
I0408 19:34:51.106196 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0"
I0408 19:34:51.175226 800094 logs.go:123] Gathering logs for kube-controller-manager [a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7] ...
I0408 19:34:51.175301 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7"
I0408 19:34:51.269888 800094 logs.go:123] Gathering logs for storage-provisioner [0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3] ...
I0408 19:34:51.269921 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3"
I0408 19:34:51.329644 800094 logs.go:123] Gathering logs for container status ...
I0408 19:34:51.329674 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0408 19:34:51.394734 800094 logs.go:123] Gathering logs for dmesg ...
I0408 19:34:51.394765 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0408 19:34:51.413970 800094 logs.go:123] Gathering logs for kube-apiserver [c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b] ...
I0408 19:34:51.414006 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b"
I0408 19:34:51.479198 800094 logs.go:123] Gathering logs for coredns [a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d] ...
I0408 19:34:51.479234 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d"
I0408 19:34:51.524419 800094 logs.go:123] Gathering logs for kube-controller-manager [38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6] ...
I0408 19:34:51.524450 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6"
I0408 19:34:51.605394 800094 logs.go:123] Gathering logs for kube-apiserver [301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39] ...
I0408 19:34:51.605435 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39"
I0408 19:34:51.700064 800094 logs.go:123] Gathering logs for etcd [9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a] ...
I0408 19:34:51.700099 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a"
I0408 19:34:51.757188 800094 logs.go:123] Gathering logs for kube-proxy [b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2] ...
I0408 19:34:51.757221 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2"
I0408 19:34:51.822372 800094 logs.go:123] Gathering logs for kindnet [f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98] ...
I0408 19:34:51.822407 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98"
I0408 19:34:51.896835 800094 logs.go:123] Gathering logs for kubernetes-dashboard [ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc] ...
I0408 19:34:51.896875 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc"
I0408 19:34:51.953497 800094 logs.go:123] Gathering logs for containerd ...
I0408 19:34:51.953532 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0408 19:34:52.016339 800094 logs.go:123] Gathering logs for etcd [8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca] ...
I0408 19:34:52.016381 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca"
I0408 19:34:52.070934 800094 logs.go:123] Gathering logs for coredns [c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439] ...
I0408 19:34:52.070965 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439"
I0408 19:34:52.117207 800094 logs.go:123] Gathering logs for kube-proxy [866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e] ...
I0408 19:34:52.117242 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e"
I0408 19:34:52.175407 800094 logs.go:123] Gathering logs for kindnet [f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c] ...
I0408 19:34:52.175443 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c"
I0408 19:34:52.228175 800094 logs.go:123] Gathering logs for storage-provisioner [4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548] ...
I0408 19:34:52.228205 800094 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548"
I0408 19:34:52.277406 800094 out.go:358] Setting ErrFile to fd 2...
I0408 19:34:52.277431 800094 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0408 19:34:52.277480 800094 out.go:270] X Problems detected in kubelet:
W0408 19:34:52.277495 800094 out.go:270] Apr 08 19:34:18 old-k8s-version-789808 kubelet[660]: E0408 19:34:18.305055 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:52.277500 800094 out.go:270] Apr 08 19:34:25 old-k8s-version-789808 kubelet[660]: E0408 19:34:25.304373 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:52.277520 800094 out.go:270] Apr 08 19:34:30 old-k8s-version-789808 kubelet[660]: E0408 19:34:30.304913 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
W0408 19:34:52.277526 800094 out.go:270] Apr 08 19:34:37 old-k8s-version-789808 kubelet[660]: E0408 19:34:37.304870 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0408 19:34:52.277538 800094 out.go:270] Apr 08 19:34:45 old-k8s-version-789808 kubelet[660]: E0408 19:34:45.305341 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
I0408 19:34:52.277543 800094 out.go:358] Setting ErrFile to fd 2...
I0408 19:34:52.277548 800094 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0408 19:34:51.312906 810787 out.go:235] - Generating certificates and keys ...
I0408 19:34:51.313011 810787 kubeadm.go:310] [certs] Using existing ca certificate authority
I0408 19:34:51.313081 810787 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0408 19:34:51.543485 810787 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0408 19:34:52.559815 810787 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0408 19:34:53.355490 810787 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0408 19:34:54.505242 810787 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0408 19:34:54.685252 810787 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0408 19:34:54.685621 810787 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-504925 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0408 19:34:55.209205 810787 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0408 19:34:55.209544 810787 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-504925 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0408 19:34:55.541584 810787 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0408 19:34:56.067444 810787 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0408 19:34:56.914775 810787 kubeadm.go:310] [certs] Generating "sa" key and public key
I0408 19:34:56.915061 810787 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0408 19:34:57.587216 810787 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0408 19:34:57.775328 810787 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0408 19:34:59.840364 810787 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0408 19:35:00.510095 810787 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0408 19:35:00.714560 810787 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0408 19:35:00.715334 810787 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0408 19:35:00.718531 810787 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0408 19:35:02.279239 800094 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0408 19:35:02.292674 800094 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0408 19:35:02.297750 800094 out.go:201]
W0408 19:35:02.300964 800094 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0408 19:35:02.301075 800094 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0408 19:35:02.301137 800094 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0408 19:35:02.301175 800094 out.go:270] *
W0408 19:35:02.302335 800094 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0408 19:35:02.307332 800094 out.go:201]
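The exit block above points at the `K8S_UNHEALTHY_CONTROL_PLANE` failure and suggests a recovery path. As a sketch (assuming `minikube` is on `PATH`; note `delete --all --purge` is destructive and removes every local profile), the suggested steps look like:

```shell
# Collect logs for the upstream issue report first,
# since delete/purge removes the profile they come from.
minikube logs -p old-k8s-version-789808 --file=logs.txt

# Then wipe all profiles and the .minikube cache, as the
# error output suggests for a control plane that never updated.
minikube delete --all --purge
```

Attaching `logs.txt` to the linked issue (kubernetes/minikube#11417) preserves the diagnostic state that the purge otherwise discards.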
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
dd56f3df85a85 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 a8d2475930bcb dashboard-metrics-scraper-8d5bb5db8-4w255
4e6196caf60a4 ba04bb24b9575 5 minutes ago Running storage-provisioner 3 61fc4765afba2 storage-provisioner
ecc276cdd5867 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 f212332f4382a kubernetes-dashboard-cd95d586-dh8qn
f647e803638fb ee75e27fff91c 5 minutes ago Running kindnet-cni 1 dc4c47f5295dc kindnet-lbsv8
c17c6adca5591 db91994f4ee8f 5 minutes ago Running coredns 1 63105a6f64580 coredns-74ff55c5b-pcfpp
b9c860dce17ec 25a5233254979 5 minutes ago Running kube-proxy 1 ab8940f31b1eb kube-proxy-n8gzl
2c69555c4619d 1611cd07b61d5 5 minutes ago Running busybox 1 2ae1b169ae68f busybox
0b5acbcf42ce4 ba04bb24b9575 5 minutes ago Exited storage-provisioner 2 61fc4765afba2 storage-provisioner
1af65166baab6 e7605f88f17d6 6 minutes ago Running kube-scheduler 1 252c061d517cd kube-scheduler-old-k8s-version-789808
301b1b37dd9d5 2c08bbbc02d3a 6 minutes ago Running kube-apiserver 1 5c4dcb983a0af kube-apiserver-old-k8s-version-789808
a5dcc2afe8064 1df8a2b116bd1 6 minutes ago Running kube-controller-manager 1 585d47c02733b kube-controller-manager-old-k8s-version-789808
9ab9795162263 05b738aa1bc63 6 minutes ago Running etcd 1 3cba986a6379e etcd-old-k8s-version-789808
df2817b0786f2 1611cd07b61d5 6 minutes ago Exited busybox 0 829fcfdc37fd1 busybox
a25413744fa83 db91994f4ee8f 8 minutes ago Exited coredns 0 37d2ad0c881ad coredns-74ff55c5b-pcfpp
f256ca55c8351 ee75e27fff91c 8 minutes ago Exited kindnet-cni 0 060b322afcd70 kindnet-lbsv8
866582a26a106 25a5233254979 8 minutes ago Exited kube-proxy 0 522be7658dc2e kube-proxy-n8gzl
38ea2abc489bb 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 4fe5cb2ba6f5b kube-controller-manager-old-k8s-version-789808
c538a5170355a 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 f7382b587ddd5 kube-apiserver-old-k8s-version-789808
8d6baf01fff7f 05b738aa1bc63 8 minutes ago Exited etcd 0 8865f5d6df263 etcd-old-k8s-version-789808
e09be3c3b77a3 e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 47af4494f8617 kube-scheduler-old-k8s-version-789808
==> containerd <==
Apr 08 19:30:44 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:30:44.315088954Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Apr 08 19:31:18 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:31:18.307440461Z" level=info msg="CreateContainer within sandbox \"a8d2475930bcb5e11a2c581b3a213b9e09b1e549e273da63b6198b32ebd1d590\" for container name:\"dashboard-metrics-scraper\" attempt:4"
Apr 08 19:31:18 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:31:18.330015034Z" level=info msg="CreateContainer within sandbox \"a8d2475930bcb5e11a2c581b3a213b9e09b1e549e273da63b6198b32ebd1d590\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"5b1b2fcaa84837e059c7b3da108ff266d2e12ff082ad06ed2b62d47b4ce67f1c\""
Apr 08 19:31:18 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:31:18.330894244Z" level=info msg="StartContainer for \"5b1b2fcaa84837e059c7b3da108ff266d2e12ff082ad06ed2b62d47b4ce67f1c\""
Apr 08 19:31:18 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:31:18.455277226Z" level=info msg="StartContainer for \"5b1b2fcaa84837e059c7b3da108ff266d2e12ff082ad06ed2b62d47b4ce67f1c\" returns successfully"
Apr 08 19:31:18 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:31:18.455597849Z" level=info msg="received exit event container_id:\"5b1b2fcaa84837e059c7b3da108ff266d2e12ff082ad06ed2b62d47b4ce67f1c\" id:\"5b1b2fcaa84837e059c7b3da108ff266d2e12ff082ad06ed2b62d47b4ce67f1c\" pid:3086 exit_status:255 exited_at:{seconds:1744140678 nanos:445775981}"
Apr 08 19:31:18 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:31:18.506360097Z" level=info msg="shim disconnected" id=5b1b2fcaa84837e059c7b3da108ff266d2e12ff082ad06ed2b62d47b4ce67f1c namespace=k8s.io
Apr 08 19:31:18 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:31:18.506407375Z" level=warning msg="cleaning up after shim disconnected" id=5b1b2fcaa84837e059c7b3da108ff266d2e12ff082ad06ed2b62d47b4ce67f1c namespace=k8s.io
Apr 08 19:31:18 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:31:18.506442460Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 08 19:31:18 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:31:18.951236043Z" level=info msg="RemoveContainer for \"a6f3a15640880c8549dd7d2af9fb6e2aa713a7239303c628c455b68b4f4fa337\""
Apr 08 19:31:18 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:31:18.960433379Z" level=info msg="RemoveContainer for \"a6f3a15640880c8549dd7d2af9fb6e2aa713a7239303c628c455b68b4f4fa337\" returns successfully"
Apr 08 19:32:13 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:13.304942131Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 08 19:32:13 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:13.310896844Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Apr 08 19:32:13 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:13.312971168Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 08 19:32:13 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:13.313077333Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Apr 08 19:32:51 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:51.306291400Z" level=info msg="CreateContainer within sandbox \"a8d2475930bcb5e11a2c581b3a213b9e09b1e549e273da63b6198b32ebd1d590\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Apr 08 19:32:51 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:51.325638989Z" level=info msg="CreateContainer within sandbox \"a8d2475930bcb5e11a2c581b3a213b9e09b1e549e273da63b6198b32ebd1d590\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e\""
Apr 08 19:32:51 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:51.326354138Z" level=info msg="StartContainer for \"dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e\""
Apr 08 19:32:51 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:51.401847974Z" level=info msg="StartContainer for \"dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e\" returns successfully"
Apr 08 19:32:51 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:51.405945462Z" level=info msg="received exit event container_id:\"dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e\" id:\"dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e\" pid:3362 exit_status:255 exited_at:{seconds:1744140771 nanos:405705102}"
Apr 08 19:32:51 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:51.430335808Z" level=info msg="shim disconnected" id=dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e namespace=k8s.io
Apr 08 19:32:51 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:51.430556993Z" level=warning msg="cleaning up after shim disconnected" id=dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e namespace=k8s.io
Apr 08 19:32:51 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:51.430676246Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 08 19:32:52 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:52.229789866Z" level=info msg="RemoveContainer for \"5b1b2fcaa84837e059c7b3da108ff266d2e12ff082ad06ed2b62d47b4ce67f1c\""
Apr 08 19:32:52 old-k8s-version-789808 containerd[568]: time="2025-04-08T19:32:52.236090195Z" level=info msg="RemoveContainer for \"5b1b2fcaa84837e059c7b3da108ff266d2e12ff082ad06ed2b62d47b4ce67f1c\" returns successfully"
==> coredns [a25413744fa8373e696d7be393ff4745eb38e28566424b0aa2b80c05987a9a6d] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:34635 - 42954 "HINFO IN 7703531696457936096.3802788266762134794. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014916809s
==> coredns [c17c6adca559116c9b78fbec795e3b607571dd36c872304582b133521b72c439] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:52319 - 60773 "HINFO IN 7104672645590410123.6314663377110391770. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013190152s
==> describe nodes <==
Name: old-k8s-version-789808
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-789808
kubernetes.io/os=linux
minikube.k8s.io/commit=00fec7ad00298ce3ccd71a2d57a7f829f082fec8
minikube.k8s.io/name=old-k8s-version-789808
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_04_08T19_26_15_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 08 Apr 2025 19:26:11 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-789808
AcquireTime: <unset>
RenewTime: Tue, 08 Apr 2025 19:35:00 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 08 Apr 2025 19:30:07 +0000 Tue, 08 Apr 2025 19:26:05 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 08 Apr 2025 19:30:07 +0000 Tue, 08 Apr 2025 19:26:05 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 08 Apr 2025 19:30:07 +0000 Tue, 08 Apr 2025 19:26:05 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 08 Apr 2025 19:30:07 +0000 Tue, 08 Apr 2025 19:26:30 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-789808
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: a125268030d949319620ae17fdf427e3
System UUID: 592173b0-e2f5-40e9-882e-0a2bef28dbb6
Boot ID: c6b5228c-dba2-4b12-9c5b-98ca5b8c0774
Kernel Version: 5.15.0-1081-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.27
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m46s
kube-system coredns-74ff55c5b-pcfpp 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 8m34s
kube-system etcd-old-k8s-version-789808 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m41s
kube-system kindnet-lbsv8 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 8m34s
kube-system kube-apiserver-old-k8s-version-789808 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m41s
kube-system kube-controller-manager-old-k8s-version-789808 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m41s
kube-system kube-proxy-n8gzl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m34s
kube-system kube-scheduler-old-k8s-version-789808 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m41s
kube-system metrics-server-9975d5f86-jmllj 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m34s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m33s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-4w255 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m39s
kubernetes-dashboard kubernetes-dashboard-cd95d586-dh8qn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m39s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 9m1s (x4 over 9m1s) kubelet Node old-k8s-version-789808 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m1s (x3 over 9m1s) kubelet Node old-k8s-version-789808 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m1s (x4 over 9m1s) kubelet Node old-k8s-version-789808 status is now: NodeHasSufficientPID
Normal Starting 8m41s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m41s kubelet Node old-k8s-version-789808 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m41s kubelet Node old-k8s-version-789808 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m41s kubelet Node old-k8s-version-789808 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m41s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m34s kubelet Node old-k8s-version-789808 status is now: NodeReady
Normal Starting 8m32s kube-proxy Starting kube-proxy.
Normal Starting 6m6s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m6s (x8 over 6m6s) kubelet Node old-k8s-version-789808 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m6s (x8 over 6m6s) kubelet Node old-k8s-version-789808 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m6s (x7 over 6m6s) kubelet Node old-k8s-version-789808 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m6s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m54s kube-proxy Starting kube-proxy.
==> dmesg <==
[Apr 8 18:15] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
[ +0.710850] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
==> etcd [8d6baf01fff7fe705cb5b1e6fbf6daa63aa3f3cf81cf395c47ad1c76718c74ca] <==
raft2025/04/08 19:26:05 INFO: ea7e25599daad906 became candidate at term 2
raft2025/04/08 19:26:05 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2025/04/08 19:26:05 INFO: ea7e25599daad906 became leader at term 2
raft2025/04/08 19:26:05 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-04-08 19:26:05.165098 I | etcdserver: published {Name:old-k8s-version-789808 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-04-08 19:26:05.165321 I | embed: ready to serve client requests
2025-04-08 19:26:05.166908 I | embed: serving client requests on 127.0.0.1:2379
2025-04-08 19:26:05.167139 I | embed: ready to serve client requests
2025-04-08 19:26:05.171162 I | embed: serving client requests on 192.168.76.2:2379
2025-04-08 19:26:05.177843 I | etcdserver: setting up the initial cluster version to 3.4
2025-04-08 19:26:05.178424 N | etcdserver/membership: set the initial cluster version to 3.4
2025-04-08 19:26:05.178516 I | etcdserver/api: enabled capabilities for version 3.4
2025-04-08 19:26:29.593702 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:26:34.570948 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:26:44.570899 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:26:54.570729 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:27:04.570927 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:27:14.570747 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:27:24.570817 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:27:34.570857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:27:44.571377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:27:54.570874 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:28:04.570777 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:28:14.572172 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:28:24.571372 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [9ab9795162263c53b1e327b86967c79aab061ae3c1cb914cf9ff696ef884bc6a] <==
2025-04-08 19:30:58.383566 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:31:08.383618 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:31:18.384863 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:31:28.383357 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:31:38.383475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:31:48.383353 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:31:58.383647 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:32:08.383428 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:32:18.383723 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:32:28.383569 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:32:38.383627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:32:48.383556 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:32:58.383344 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:33:08.383493 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:33:18.383633 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:33:28.383405 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:33:38.383495 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:33:48.383523 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:33:58.384184 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:34:08.383640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:34:18.383497 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:34:28.383758 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:34:38.383658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:34:48.383526 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-08 19:34:58.383750 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
19:35:05 up 3:17, 0 users, load average: 3.11, 2.48, 2.63
Linux old-k8s-version-789808 5.15.0-1081-aws #88~20.04.1-Ubuntu SMP Fri Mar 28 14:48:25 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [f256ca55c8351a7dcebe07343d23a081f8928d414937ce89700c5c04a37a5c3c] <==
I0408 19:26:33.924087 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0408 19:26:33.924278 1 metrics.go:61] Registering metrics
I0408 19:26:33.924439 1 controller.go:401] Syncing nftables rules
I0408 19:26:43.723568 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:26:43.723632 1 main.go:301] handling current node
I0408 19:26:53.723810 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:26:53.723848 1 main.go:301] handling current node
I0408 19:27:03.723715 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:27:03.723750 1 main.go:301] handling current node
I0408 19:27:13.730688 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:27:13.730726 1 main.go:301] handling current node
I0408 19:27:23.732580 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:27:23.732616 1 main.go:301] handling current node
I0408 19:27:33.723942 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:27:33.723975 1 main.go:301] handling current node
I0408 19:27:43.722997 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:27:43.723046 1 main.go:301] handling current node
I0408 19:27:53.730412 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:27:53.730447 1 main.go:301] handling current node
I0408 19:28:03.730562 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:28:03.730686 1 main.go:301] handling current node
I0408 19:28:13.723950 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:28:13.723986 1 main.go:301] handling current node
I0408 19:28:23.723278 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:28:23.723312 1 main.go:301] handling current node
==> kindnet [f647e803638fbecd6e184469f62cbc0586f6ad2631b7be458c45972465a6de98] <==
I0408 19:33:02.225566 1 main.go:301] handling current node
I0408 19:33:12.223572 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:33:12.223813 1 main.go:301] handling current node
I0408 19:33:22.230674 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:33:22.230712 1 main.go:301] handling current node
I0408 19:33:32.230284 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:33:32.230320 1 main.go:301] handling current node
I0408 19:33:42.230208 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:33:42.230251 1 main.go:301] handling current node
I0408 19:33:52.229024 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:33:52.229062 1 main.go:301] handling current node
I0408 19:34:02.230581 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:34:02.230627 1 main.go:301] handling current node
I0408 19:34:12.224106 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:34:12.224401 1 main.go:301] handling current node
I0408 19:34:22.228780 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:34:22.228816 1 main.go:301] handling current node
I0408 19:34:32.232837 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:34:32.232977 1 main.go:301] handling current node
I0408 19:34:42.233056 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:34:42.233104 1 main.go:301] handling current node
I0408 19:34:52.228384 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:34:52.228424 1 main.go:301] handling current node
I0408 19:35:02.230562 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0408 19:35:02.230598 1 main.go:301] handling current node
==> kube-apiserver [301b1b37dd9d539ce16d4446ee1165088cbf75729c9b3579717fbfda503ecd39] <==
I0408 19:31:18.139292 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0408 19:31:18.139329 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0408 19:31:49.943360 1 client.go:360] parsed scheme: "passthrough"
I0408 19:31:49.943461 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0408 19:31:49.943491 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0408 19:32:10.165883 1 handler_proxy.go:102] no RequestInfo found in the context
E0408 19:32:10.166074 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0408 19:32:10.166129 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0408 19:32:29.268609 1 client.go:360] parsed scheme: "passthrough"
I0408 19:32:29.268665 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0408 19:32:29.268675 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0408 19:33:11.454297 1 client.go:360] parsed scheme: "passthrough"
I0408 19:33:11.454343 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0408 19:33:11.454352 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0408 19:33:47.249792 1 client.go:360] parsed scheme: "passthrough"
I0408 19:33:47.249836 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0408 19:33:47.249989 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0408 19:34:08.536677 1 handler_proxy.go:102] no RequestInfo found in the context
E0408 19:34:08.536874 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0408 19:34:08.536910 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0408 19:34:30.222461 1 client.go:360] parsed scheme: "passthrough"
I0408 19:34:30.222717 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0408 19:34:30.222860 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [c538a5170355a1e7cb67b7e9077a30ecd5d5dce6207b86e13abe124b4a275a4b] <==
I0408 19:26:12.518090 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0408 19:26:12.542318 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0408 19:26:12.547609 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0408 19:26:12.547634 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0408 19:26:13.043126 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0408 19:26:13.089528 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0408 19:26:13.233862 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0408 19:26:13.235132 1 controller.go:606] quota admission added evaluator for: endpoints
I0408 19:26:13.239229 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0408 19:26:14.167458 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0408 19:26:14.703278 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0408 19:26:14.794643 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0408 19:26:23.236645 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0408 19:26:30.147984 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0408 19:26:30.160325 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0408 19:26:50.465746 1 client.go:360] parsed scheme: "passthrough"
I0408 19:26:50.465791 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0408 19:26:50.465800 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0408 19:27:27.620279 1 client.go:360] parsed scheme: "passthrough"
I0408 19:27:27.620325 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0408 19:27:27.620334 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0408 19:28:06.323260 1 client.go:360] parsed scheme: "passthrough"
I0408 19:28:06.323309 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0408 19:28:06.323318 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0408 19:28:28.141793 1 upgradeaware.go:387] Error proxying data from backend to client: write tcp 192.168.76.2:8443->192.168.76.1:46242: write: broken pipe
==> kube-controller-manager [38ea2abc489bba94c661fc478eb628956a439a894c433e92691e86b81e00b6a6] <==
I0408 19:26:30.237029 1 shared_informer.go:247] Caches are synced for stateful set
I0408 19:26:30.240598 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lbsv8"
I0408 19:26:30.240637 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-n8gzl"
I0408 19:26:30.247710 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-789808" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I0408 19:26:30.283765 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-2xbjd"
I0408 19:26:30.285601 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0408 19:26:30.295077 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0408 19:26:30.296908 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0408 19:26:30.297077 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0408 19:26:30.297060 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0408 19:26:30.325615 1 shared_informer.go:247] Caches are synced for service account
I0408 19:26:30.338127 1 shared_informer.go:247] Caches are synced for resource quota
I0408 19:26:30.345756 1 shared_informer.go:247] Caches are synced for resource quota
I0408 19:26:30.358793 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-pcfpp"
I0408 19:26:30.412864 1 shared_informer.go:247] Caches are synced for namespace
I0408 19:26:30.498293 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0408 19:26:30.740553 1 shared_informer.go:247] Caches are synced for garbage collector
I0408 19:26:30.740573 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0408 19:26:30.799480 1 shared_informer.go:247] Caches are synced for garbage collector
I0408 19:26:32.048751 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0408 19:26:32.063245 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-2xbjd"
I0408 19:26:35.138692 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0408 19:28:29.119814 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E0408 19:28:29.259421 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0408 19:28:29.302840 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
==> kube-controller-manager [a5dcc2afe8064b086017ce1c7b554f81f6481148fdf2960db519751b874740e7] <==
E0408 19:30:57.321845 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0408 19:31:02.926413 1 request.go:655] Throttling request took 1.048001579s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0408 19:31:03.777933 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0408 19:31:27.824070 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0408 19:31:35.428312 1 request.go:655] Throttling request took 1.048063314s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0408 19:31:36.279839 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0408 19:31:58.327317 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0408 19:32:07.930402 1 request.go:655] Throttling request took 1.048465216s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0408 19:32:08.781782 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0408 19:32:28.830725 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0408 19:32:40.432180 1 request.go:655] Throttling request took 1.048250596s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
W0408 19:32:41.283663 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0408 19:32:59.374210 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0408 19:33:12.934289 1 request.go:655] Throttling request took 1.048556171s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1?timeout=32s
W0408 19:33:13.785809 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0408 19:33:29.876787 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0408 19:33:45.436403 1 request.go:655] Throttling request took 1.048410087s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0408 19:33:46.287888 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0408 19:34:00.414777 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0408 19:34:17.938305 1 request.go:655] Throttling request took 1.04827454s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0408 19:34:18.789769 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0408 19:34:30.916557 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0408 19:34:50.440144 1 request.go:655] Throttling request took 1.048127734s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0408 19:34:51.291987 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0408 19:35:01.418816 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
==> kube-proxy [866582a26a1061c34f4ad707073d56157b032f4b444db297973abe7c75af4a2e] <==
I0408 19:26:32.584794 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0408 19:26:32.584878 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0408 19:26:32.624871 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0408 19:26:32.628839 1 server_others.go:185] Using iptables Proxier.
I0408 19:26:32.629117 1 server.go:650] Version: v1.20.0
I0408 19:26:32.636274 1 config.go:315] Starting service config controller
I0408 19:26:32.636297 1 shared_informer.go:240] Waiting for caches to sync for service config
I0408 19:26:32.637077 1 config.go:224] Starting endpoint slice config controller
I0408 19:26:32.637084 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0408 19:26:32.736427 1 shared_informer.go:247] Caches are synced for service config
I0408 19:26:32.737152 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [b9c860dce17ecb3e46ed0cbbf4b5093c77f0097e71cae223b2ab4549514a8dc2] <==
I0408 19:29:10.060642 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0408 19:29:10.060920 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0408 19:29:10.090810 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0408 19:29:10.090917 1 server_others.go:185] Using iptables Proxier.
I0408 19:29:10.091196 1 server.go:650] Version: v1.20.0
I0408 19:29:10.092670 1 config.go:315] Starting service config controller
I0408 19:29:10.092799 1 shared_informer.go:240] Waiting for caches to sync for service config
I0408 19:29:10.092970 1 config.go:224] Starting endpoint slice config controller
I0408 19:29:10.093062 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0408 19:29:10.193075 1 shared_informer.go:247] Caches are synced for service config
I0408 19:29:10.193236 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [1af65166baab65586d5ef0636470183559664a9435dfbd0012ea53d01ae35376] <==
I0408 19:29:03.042884 1 serving.go:331] Generated self-signed cert in-memory
W0408 19:29:07.335005 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0408 19:29:07.338806 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0408 19:29:07.338994 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0408 19:29:07.339084 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0408 19:29:07.658571 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0408 19:29:07.659436 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0408 19:29:07.659592 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0408 19:29:07.659697 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0408 19:29:07.773555 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [e09be3c3b77a3cd98dd0c353490bb508a9c32f1ab0d559e46e827e0f3346d9d0] <==
W0408 19:26:11.645860 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0408 19:26:11.646158 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0408 19:26:11.646282 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0408 19:26:11.646373 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0408 19:26:11.775611 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0408 19:26:11.781635 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0408 19:26:11.781668 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0408 19:26:11.781687 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0408 19:26:11.786825 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0408 19:26:11.787124 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0408 19:26:11.788344 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0408 19:26:11.788437 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0408 19:26:11.788510 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0408 19:26:11.788609 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0408 19:26:11.788674 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0408 19:26:11.788740 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0408 19:26:11.788803 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0408 19:26:11.788869 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0408 19:26:11.788935 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0408 19:26:11.791410 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0408 19:26:12.619095 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0408 19:26:12.678147 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0408 19:26:12.708425 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0408 19:26:12.810766 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0408 19:26:13.281717 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Apr 08 19:33:12 old-k8s-version-789808 kubelet[660]: E0408 19:33:12.304502 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
Apr 08 19:33:17 old-k8s-version-789808 kubelet[660]: E0408 19:33:17.304317 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 08 19:33:23 old-k8s-version-789808 kubelet[660]: I0408 19:33:23.303794 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e
Apr 08 19:33:23 old-k8s-version-789808 kubelet[660]: E0408 19:33:23.304147 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
Apr 08 19:33:28 old-k8s-version-789808 kubelet[660]: E0408 19:33:28.306566 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 08 19:33:36 old-k8s-version-789808 kubelet[660]: I0408 19:33:36.303894 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e
Apr 08 19:33:36 old-k8s-version-789808 kubelet[660]: E0408 19:33:36.304754 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
Apr 08 19:33:42 old-k8s-version-789808 kubelet[660]: E0408 19:33:42.305165 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 08 19:33:51 old-k8s-version-789808 kubelet[660]: I0408 19:33:51.303716 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e
Apr 08 19:33:51 old-k8s-version-789808 kubelet[660]: E0408 19:33:51.304494 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
Apr 08 19:33:55 old-k8s-version-789808 kubelet[660]: E0408 19:33:55.304450 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 08 19:34:05 old-k8s-version-789808 kubelet[660]: I0408 19:34:05.303766 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e
Apr 08 19:34:05 old-k8s-version-789808 kubelet[660]: E0408 19:34:05.304139 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
Apr 08 19:34:10 old-k8s-version-789808 kubelet[660]: E0408 19:34:10.304418 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 08 19:34:18 old-k8s-version-789808 kubelet[660]: I0408 19:34:18.304225 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e
Apr 08 19:34:18 old-k8s-version-789808 kubelet[660]: E0408 19:34:18.305055 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
Apr 08 19:34:25 old-k8s-version-789808 kubelet[660]: E0408 19:34:25.304373 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 08 19:34:30 old-k8s-version-789808 kubelet[660]: I0408 19:34:30.303946 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e
Apr 08 19:34:30 old-k8s-version-789808 kubelet[660]: E0408 19:34:30.304913 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
Apr 08 19:34:37 old-k8s-version-789808 kubelet[660]: E0408 19:34:37.304870 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 08 19:34:45 old-k8s-version-789808 kubelet[660]: I0408 19:34:45.304526 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e
Apr 08 19:34:45 old-k8s-version-789808 kubelet[660]: E0408 19:34:45.305341 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
Apr 08 19:34:51 old-k8s-version-789808 kubelet[660]: E0408 19:34:51.304691 660 pod_workers.go:191] Error syncing pod bcc436e6-e1d8-4f2d-b3e8-00e7513db658 ("metrics-server-9975d5f86-jmllj_kube-system(bcc436e6-e1d8-4f2d-b3e8-00e7513db658)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 08 19:34:58 old-k8s-version-789808 kubelet[660]: I0408 19:34:58.310936 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: dd56f3df85a85d331c45cf5824018b5778513a7022729bbc4ed7503cf67ed17e
Apr 08 19:34:58 old-k8s-version-789808 kubelet[660]: E0408 19:34:58.311485 660 pod_workers.go:191] Error syncing pod bda890eb-3cbb-4626-9e3e-c3ff768730d5 ("dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-4w255_kubernetes-dashboard(bda890eb-3cbb-4626-9e3e-c3ff768730d5)"
==> kubernetes-dashboard [ecc276cdd5867e4650952acbdab4bf192dc6929667faeeb220ba08ed4a3b16fc] <==
2025/04/08 19:29:30 Starting overwatch
2025/04/08 19:29:30 Using namespace: kubernetes-dashboard
2025/04/08 19:29:30 Using in-cluster config to connect to apiserver
2025/04/08 19:29:30 Using secret token for csrf signing
2025/04/08 19:29:30 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/04/08 19:29:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/04/08 19:29:30 Successful initial request to the apiserver, version: v1.20.0
2025/04/08 19:29:30 Generating JWE encryption key
2025/04/08 19:29:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/04/08 19:29:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/04/08 19:29:31 Initializing JWE encryption key from synchronized object
2025/04/08 19:29:31 Creating in-cluster Sidecar client
2025/04/08 19:29:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/08 19:29:31 Serving insecurely on HTTP port: 9090
2025/04/08 19:30:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/08 19:30:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/08 19:31:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/08 19:31:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/08 19:32:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/08 19:32:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/08 19:33:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/08 19:33:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/08 19:34:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/08 19:34:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/08 19:35:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [0b5acbcf42ce43a667566d57398b418a4eee569cecb5839e72c8d8cb883e5cf3] <==
I0408 19:29:09.213888 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0408 19:29:39.216466 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
==> storage-provisioner [4e6196caf60a44a79cedaecef0f230d20c5abe3d80676580036a299a6adb0548] <==
I0408 19:29:51.477667 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0408 19:29:51.530949 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0408 19:29:51.530997 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0408 19:30:09.025217 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0408 19:30:09.025560 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-789808_0a8b67e0-2e95-49d2-b64e-dfb8f4abc6ea!
I0408 19:30:09.026584 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34144c6c-3b12-4416-9394-74a7962039bc", APIVersion:"v1", ResourceVersion:"838", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-789808_0a8b67e0-2e95-49d2-b64e-dfb8f4abc6ea became leader
I0408 19:30:09.125870 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-789808_0a8b67e0-2e95-49d2-b64e-dfb8f4abc6ea!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-789808 -n old-k8s-version-789808
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-789808 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-jmllj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-789808 describe pod metrics-server-9975d5f86-jmllj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-789808 describe pod metrics-server-9975d5f86-jmllj: exit status 1 (194.554539ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-jmllj" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-789808 describe pod metrics-server-9975d5f86-jmllj: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (384.87s)