=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-808561 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
E0723 15:13:49.612467 3506898 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/addons-644741/client.crt: no such file or directory
E0723 15:13:52.219524 3506898 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/functional-697627/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-808561 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m13.393281703s)
-- stdout --
* [old-k8s-version-808561] minikube v1.33.1 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=19319
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/19319-3501487/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3501487/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
* Using the docker driver based on existing profile
* Starting "old-k8s-version-808561" primary control-plane node in "old-k8s-version-808561" cluster
* Pulling base image v0.0.44-1721687125-19319 ...
* Restarting existing docker container for "old-k8s-version-808561" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
* Verifying Kubernetes components...
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-808561 addons enable metrics-server
* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
-- /stdout --
** stderr **
I0723 15:13:49.195120 3714598 out.go:291] Setting OutFile to fd 1 ...
I0723 15:13:49.195286 3714598 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 15:13:49.195299 3714598 out.go:304] Setting ErrFile to fd 2...
I0723 15:13:49.195306 3714598 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 15:13:49.195592 3714598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3501487/.minikube/bin
I0723 15:13:49.196010 3714598 out.go:298] Setting JSON to false
I0723 15:13:49.197281 3714598 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":86151,"bootTime":1721661479,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I0723 15:13:49.197351 3714598 start.go:139] virtualization:
I0723 15:13:49.200469 3714598 out.go:177] * [old-k8s-version-808561] minikube v1.33.1 on Ubuntu 20.04 (arm64)
I0723 15:13:49.204186 3714598 out.go:177] - MINIKUBE_LOCATION=19319
I0723 15:13:49.204338 3714598 notify.go:220] Checking for updates...
I0723 15:13:49.208769 3714598 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0723 15:13:49.211662 3714598 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19319-3501487/kubeconfig
I0723 15:13:49.213939 3714598 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3501487/.minikube
I0723 15:13:49.216332 3714598 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0723 15:13:49.218832 3714598 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0723 15:13:49.221973 3714598 config.go:182] Loaded profile config "old-k8s-version-808561": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0723 15:13:49.224464 3714598 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
I0723 15:13:49.227377 3714598 driver.go:392] Setting default libvirt URI to qemu:///system
I0723 15:13:49.257150 3714598 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
I0723 15:13:49.257263 3714598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0723 15:13:49.318727 3714598 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:78 SystemTime:2024-07-23 15:13:49.30951962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
I0723 15:13:49.318839 3714598 docker.go:307] overlay module found
I0723 15:13:49.321257 3714598 out.go:177] * Using the docker driver based on existing profile
I0723 15:13:49.323331 3714598 start.go:297] selected driver: docker
I0723 15:13:49.323350 3714598 start.go:901] validating driver "docker" against &{Name:old-k8s-version-808561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-808561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0723 15:13:49.323471 3714598 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0723 15:13:49.324153 3714598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0723 15:13:49.378383 3714598 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:78 SystemTime:2024-07-23 15:13:49.369330074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
I0723 15:13:49.378778 3714598 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0723 15:13:49.378843 3714598 cni.go:84] Creating CNI manager for ""
I0723 15:13:49.378858 3714598 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0723 15:13:49.378908 3714598 start.go:340] cluster config:
{Name:old-k8s-version-808561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-808561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0723 15:13:49.382408 3714598 out.go:177] * Starting "old-k8s-version-808561" primary control-plane node in "old-k8s-version-808561" cluster
I0723 15:13:49.384367 3714598 cache.go:121] Beginning downloading kic base image for docker with containerd
I0723 15:13:49.386208 3714598 out.go:177] * Pulling base image v0.0.44-1721687125-19319 ...
I0723 15:13:49.387983 3714598 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0723 15:13:49.388034 3714598 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-3501487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0723 15:13:49.388046 3714598 cache.go:56] Caching tarball of preloaded images
I0723 15:13:49.388076 3714598 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local docker daemon
I0723 15:13:49.388126 3714598 preload.go:172] Found /home/jenkins/minikube-integration/19319-3501487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0723 15:13:49.388137 3714598 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0723 15:13:49.388255 3714598 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/old-k8s-version-808561/config.json ...
W0723 15:13:49.407590 3714598 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae is of wrong architecture
I0723 15:13:49.407613 3714598 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae to local cache
I0723 15:13:49.407710 3714598 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory
I0723 15:13:49.407735 3714598 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory, skipping pull
I0723 15:13:49.407746 3714598 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae exists in cache, skipping pull
I0723 15:13:49.407761 3714598 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae as a tarball
I0723 15:13:49.407770 3714598 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae from local cache
I0723 15:13:49.531924 3714598 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae from cached tarball
I0723 15:13:49.531960 3714598 cache.go:194] Successfully downloaded all kic artifacts
I0723 15:13:49.532010 3714598 start.go:360] acquireMachinesLock for old-k8s-version-808561: {Name:mkd1672ae7e01ed3d7a1c096dae8cc42d333ba78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0723 15:13:49.532075 3714598 start.go:364] duration metric: took 44.627µs to acquireMachinesLock for "old-k8s-version-808561"
I0723 15:13:49.532098 3714598 start.go:96] Skipping create...Using existing machine configuration
I0723 15:13:49.532104 3714598 fix.go:54] fixHost starting:
I0723 15:13:49.532455 3714598 cli_runner.go:164] Run: docker container inspect old-k8s-version-808561 --format={{.State.Status}}
I0723 15:13:49.548019 3714598 fix.go:112] recreateIfNeeded on old-k8s-version-808561: state=Stopped err=<nil>
W0723 15:13:49.548048 3714598 fix.go:138] unexpected machine state, will restart: <nil>
I0723 15:13:49.551972 3714598 out.go:177] * Restarting existing docker container for "old-k8s-version-808561" ...
I0723 15:13:49.554080 3714598 cli_runner.go:164] Run: docker start old-k8s-version-808561
I0723 15:13:49.864194 3714598 cli_runner.go:164] Run: docker container inspect old-k8s-version-808561 --format={{.State.Status}}
I0723 15:13:49.884843 3714598 kic.go:430] container "old-k8s-version-808561" state is running.
I0723 15:13:49.885241 3714598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-808561
I0723 15:13:49.907174 3714598 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/old-k8s-version-808561/config.json ...
I0723 15:13:49.907408 3714598 machine.go:94] provisionDockerMachine start ...
I0723 15:13:49.907472 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
I0723 15:13:49.933216 3714598 main.go:141] libmachine: Using SSH client type: native
I0723 15:13:49.933480 3714598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 37471 <nil> <nil>}
I0723 15:13:49.933496 3714598 main.go:141] libmachine: About to run SSH command:
hostname
I0723 15:13:49.934249 3714598 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0723 15:13:53.072093 3714598 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-808561
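
The handshake EOF at 15:13:49.93 followed by a clean `hostname` round-trip at 15:13:53 is the provisioner retrying its SSH dial while sshd comes up inside the freshly restarted container. A minimal sketch of that retry shape using golang.org/x/crypto/ssh (illustrative only, not minikube's actual provisioning code):

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry re-attempts the SSH handshake until the daemon inside the
// container starts accepting connections or the deadline passes. The early
// "ssh: handshake failed: EOF" above is the transient failure this absorbs.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
		}
		time.Sleep(time.Second)
	}
}
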
I0723 15:13:53.072119 3714598 ubuntu.go:169] provisioning hostname "old-k8s-version-808561"
I0723 15:13:53.072194 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
I0723 15:13:53.089599 3714598 main.go:141] libmachine: Using SSH client type: native
I0723 15:13:53.089870 3714598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 37471 <nil> <nil>}
I0723 15:13:53.089887 3714598 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-808561 && echo "old-k8s-version-808561" | sudo tee /etc/hostname
I0723 15:13:53.224812 3714598 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-808561
I0723 15:13:53.224896 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
I0723 15:13:53.243742 3714598 main.go:141] libmachine: Using SSH client type: native
I0723 15:13:53.244162 3714598 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 37471 <nil> <nil>}
I0723 15:13:53.244182 3714598 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-808561' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-808561/g' /etc/hosts;
  else
    echo '127.0.1.1 old-k8s-version-808561' | sudo tee -a /etc/hosts;
  fi
fi
I0723 15:13:53.368231 3714598 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0723 15:13:53.368260 3714598 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19319-3501487/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-3501487/.minikube}
I0723 15:13:53.368350 3714598 ubuntu.go:177] setting up certificates
I0723 15:13:53.368373 3714598 provision.go:84] configureAuth start
I0723 15:13:53.368439 3714598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-808561
I0723 15:13:53.384823 3714598 provision.go:143] copyHostCerts
I0723 15:13:53.384894 3714598 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.pem, removing ...
I0723 15:13:53.384907 3714598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.pem
I0723 15:13:53.384991 3714598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.pem (1082 bytes)
I0723 15:13:53.385102 3714598 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-3501487/.minikube/cert.pem, removing ...
I0723 15:13:53.385118 3714598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-3501487/.minikube/cert.pem
I0723 15:13:53.385150 3714598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-3501487/.minikube/cert.pem (1123 bytes)
I0723 15:13:53.385254 3714598 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-3501487/.minikube/key.pem, removing ...
I0723 15:13:53.385265 3714598 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-3501487/.minikube/key.pem
I0723 15:13:53.385290 3714598 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-3501487/.minikube/key.pem (1679 bytes)
I0723 15:13:53.385363 3714598 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-808561 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-808561]
I0723 15:13:53.851861 3714598 provision.go:177] copyRemoteCerts
I0723 15:13:53.851929 3714598 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0723 15:13:53.851983 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
I0723 15:13:53.870242 3714598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37471 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/old-k8s-version-808561/id_rsa Username:docker}
I0723 15:13:53.960994 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0723 15:13:53.986029 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0723 15:13:54.014408 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0723 15:13:54.043570 3714598 provision.go:87] duration metric: took 675.175873ms to configureAuth
I0723 15:13:54.043641 3714598 ubuntu.go:193] setting minikube options for container-runtime
I0723 15:13:54.043864 3714598 config.go:182] Loaded profile config "old-k8s-version-808561": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0723 15:13:54.043877 3714598 machine.go:97] duration metric: took 4.136461522s to provisionDockerMachine
I0723 15:13:54.043886 3714598 start.go:293] postStartSetup for "old-k8s-version-808561" (driver="docker")
I0723 15:13:54.043897 3714598 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0723 15:13:54.043953 3714598 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0723 15:13:54.044004 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
I0723 15:13:54.061357 3714598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37471 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/old-k8s-version-808561/id_rsa Username:docker}
I0723 15:13:54.153458 3714598 ssh_runner.go:195] Run: cat /etc/os-release
I0723 15:13:54.156569 3714598 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0723 15:13:54.156610 3714598 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0723 15:13:54.156629 3714598 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0723 15:13:54.156638 3714598 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0723 15:13:54.156651 3714598 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3501487/.minikube/addons for local assets ...
I0723 15:13:54.156711 3714598 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3501487/.minikube/files for local assets ...
I0723 15:13:54.156798 3714598 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-3501487/.minikube/files/etc/ssl/certs/35068982.pem -> 35068982.pem in /etc/ssl/certs
I0723 15:13:54.156904 3714598 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0723 15:13:54.165939 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/files/etc/ssl/certs/35068982.pem --> /etc/ssl/certs/35068982.pem (1708 bytes)
I0723 15:13:54.191249 3714598 start.go:296] duration metric: took 147.348347ms for postStartSetup
I0723 15:13:54.191379 3714598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0723 15:13:54.191450 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
I0723 15:13:54.212622 3714598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37471 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/old-k8s-version-808561/id_rsa Username:docker}
I0723 15:13:54.301877 3714598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0723 15:13:54.306757 3714598 fix.go:56] duration metric: took 4.774644873s for fixHost
I0723 15:13:54.306782 3714598 start.go:83] releasing machines lock for "old-k8s-version-808561", held for 4.774699485s
I0723 15:13:54.306857 3714598 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-808561
I0723 15:13:54.323545 3714598 ssh_runner.go:195] Run: cat /version.json
I0723 15:13:54.323602 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
I0723 15:13:54.323920 3714598 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0723 15:13:54.323977 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
I0723 15:13:54.346924 3714598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37471 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/old-k8s-version-808561/id_rsa Username:docker}
I0723 15:13:54.353365 3714598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37471 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/old-k8s-version-808561/id_rsa Username:docker}
I0723 15:13:54.612905 3714598 ssh_runner.go:195] Run: systemctl --version
I0723 15:13:54.617739 3714598 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0723 15:13:54.625743 3714598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0723 15:13:54.644737 3714598 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0723 15:13:54.644893 3714598 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0723 15:13:54.653716 3714598 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0723 15:13:54.653741 3714598 start.go:495] detecting cgroup driver to use...
I0723 15:13:54.653771 3714598 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0723 15:13:54.653824 3714598 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0723 15:13:54.673114 3714598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0723 15:13:54.685094 3714598 docker.go:217] disabling cri-docker service (if available) ...
I0723 15:13:54.685157 3714598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0723 15:13:54.698237 3714598 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0723 15:13:54.709709 3714598 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0723 15:13:54.793199 3714598 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0723 15:13:54.887563 3714598 docker.go:233] disabling docker service ...
I0723 15:13:54.887661 3714598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0723 15:13:54.901238 3714598 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0723 15:13:54.912885 3714598 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0723 15:13:55.014950 3714598 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0723 15:13:55.114049 3714598 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0723 15:13:55.126646 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0723 15:13:55.143740 3714598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0723 15:13:55.156093 3714598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0723 15:13:55.166705 3714598 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0723 15:13:55.166779 3714598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0723 15:13:55.177084 3714598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0723 15:13:55.187281 3714598 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0723 15:13:55.197731 3714598 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0723 15:13:55.207741 3714598 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0723 15:13:55.217831 3714598 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0723 15:13:55.227924 3714598 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0723 15:13:55.237538 3714598 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0723 15:13:55.246327 3714598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0723 15:13:55.326932 3714598 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0723 15:13:55.511744 3714598 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0723 15:13:55.511886 3714598 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0723 15:13:55.517867 3714598 start.go:563] Will wait 60s for crictl version
I0723 15:13:55.517976 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:13:55.523260 3714598 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0723 15:13:55.576373 3714598 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.19
RuntimeApiVersion: v1
I0723 15:13:55.576482 3714598 ssh_runner.go:195] Run: containerd --version
I0723 15:13:55.602456 3714598 ssh_runner.go:195] Run: containerd --version
I0723 15:13:55.635008 3714598 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
I0723 15:13:55.636969 3714598 cli_runner.go:164] Run: docker network inspect old-k8s-version-808561 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0723 15:13:55.652359 3714598 ssh_runner.go:195] Run: grep 192.168.94.1 host.minikube.internal$ /etc/hosts
I0723 15:13:55.655921 3714598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
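
The one-liner above updates /etc/hosts idempotently: drop any existing host.minikube.internal mapping, append the current gateway IP, write to a temp file, and copy it back with sudo. The same filter-then-append shape in Go (an illustrative sketch; the temp-file-and-sudo-copy step is elided):

package main

import "strings"

// ensureHostsEntry removes any line already mapping name (tab-separated,
// as the grep -v $'\t...' filter does) and appends "ip\tname" at the end.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping: drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}
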
I0723 15:13:55.666935 3714598 kubeadm.go:883] updating cluster {Name:old-k8s-version-808561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-808561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0723 15:13:55.667076 3714598 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0723 15:13:55.667140 3714598 ssh_runner.go:195] Run: sudo crictl images --output json
I0723 15:13:55.704084 3714598 containerd.go:627] all images are preloaded for containerd runtime.
I0723 15:13:55.704105 3714598 containerd.go:534] Images already preloaded, skipping extraction
I0723 15:13:55.704165 3714598 ssh_runner.go:195] Run: sudo crictl images --output json
I0723 15:13:55.741938 3714598 containerd.go:627] all images are preloaded for containerd runtime.
I0723 15:13:55.741963 3714598 cache_images.go:84] Images are preloaded, skipping loading
I0723 15:13:55.741971 3714598 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.20.0 containerd true true} ...
I0723 15:13:55.742098 3714598 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-808561 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.94.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-808561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0723 15:13:55.742176 3714598 ssh_runner.go:195] Run: sudo crictl info
I0723 15:13:55.782509 3714598 cni.go:84] Creating CNI manager for ""
I0723 15:13:55.782533 3714598 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0723 15:13:55.782542 3714598 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0723 15:13:55.782585 3714598 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-808561 NodeName:old-k8s-version-808561 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0723 15:13:55.782752 3714598 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.94.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "old-k8s-version-808561"
  kubeletExtraArgs:
    node-ip: 192.168.94.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0723 15:13:55.782825 3714598 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0723 15:13:55.791883 3714598 binaries.go:44] Found k8s binaries, skipping transfer
I0723 15:13:55.791951 3714598 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0723 15:13:55.800886 3714598 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0723 15:13:55.819762 3714598 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0723 15:13:55.840731 3714598 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0723 15:13:55.861172 3714598 ssh_runner.go:195] Run: grep 192.168.94.2 control-plane.minikube.internal$ /etc/hosts
I0723 15:13:55.868860 3714598 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0723 15:13:55.879712 3714598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0723 15:13:55.967046 3714598 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0723 15:13:55.981346 3714598 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/old-k8s-version-808561 for IP: 192.168.94.2
I0723 15:13:55.981418 3714598 certs.go:194] generating shared ca certs ...
I0723 15:13:55.981449 3714598 certs.go:226] acquiring lock for ca certs: {Name:mke9a16e2fca4d99d18822e41138928c0b1feaa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:13:55.981625 3714598 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.key
I0723 15:13:55.981691 3714598 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/proxy-client-ca.key
I0723 15:13:55.981712 3714598 certs.go:256] generating profile certs ...
I0723 15:13:55.981841 3714598 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/old-k8s-version-808561/client.key
I0723 15:13:55.981951 3714598 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/old-k8s-version-808561/apiserver.key.dc5c0a7c
I0723 15:13:55.982017 3714598 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/old-k8s-version-808561/proxy-client.key
I0723 15:13:55.982158 3714598 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/3506898.pem (1338 bytes)
W0723 15:13:55.982210 3714598 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/3506898_empty.pem, impossibly tiny 0 bytes
I0723 15:13:55.982232 3714598 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca-key.pem (1679 bytes)
I0723 15:13:55.982288 3714598 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca.pem (1082 bytes)
I0723 15:13:55.982335 3714598 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/cert.pem (1123 bytes)
I0723 15:13:55.982389 3714598 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/key.pem (1679 bytes)
I0723 15:13:55.982460 3714598 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/files/etc/ssl/certs/35068982.pem (1708 bytes)
I0723 15:13:55.983114 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0723 15:13:56.012430 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0723 15:13:56.043956 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0723 15:13:56.071843 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0723 15:13:56.101865 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/old-k8s-version-808561/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0723 15:13:56.134214 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/old-k8s-version-808561/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0723 15:13:56.159310 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/old-k8s-version-808561/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0723 15:13:56.184682 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/old-k8s-version-808561/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0723 15:13:56.209213 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/files/etc/ssl/certs/35068982.pem --> /usr/share/ca-certificates/35068982.pem (1708 bytes)
I0723 15:13:56.234474 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0723 15:13:56.259214 3714598 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/3506898.pem --> /usr/share/ca-certificates/3506898.pem (1338 bytes)
I0723 15:13:56.285098 3714598 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0723 15:13:56.302546 3714598 ssh_runner.go:195] Run: openssl version
I0723 15:13:56.307732 3714598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35068982.pem && ln -fs /usr/share/ca-certificates/35068982.pem /etc/ssl/certs/35068982.pem"
I0723 15:13:56.317074 3714598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35068982.pem
I0723 15:13:56.320591 3714598 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:33 /usr/share/ca-certificates/35068982.pem
I0723 15:13:56.320654 3714598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35068982.pem
I0723 15:13:56.327469 3714598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/35068982.pem /etc/ssl/certs/3ec20f2e.0"
I0723 15:13:56.336567 3714598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0723 15:13:56.345745 3714598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0723 15:13:56.349573 3714598 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 14:25 /usr/share/ca-certificates/minikubeCA.pem
I0723 15:13:56.349654 3714598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0723 15:13:56.356353 3714598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0723 15:13:56.364944 3714598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3506898.pem && ln -fs /usr/share/ca-certificates/3506898.pem /etc/ssl/certs/3506898.pem"
I0723 15:13:56.373866 3714598 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3506898.pem
I0723 15:13:56.377612 3714598 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:33 /usr/share/ca-certificates/3506898.pem
I0723 15:13:56.377673 3714598 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3506898.pem
I0723 15:13:56.384560 3714598 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3506898.pem /etc/ssl/certs/51391683.0"
I0723 15:13:56.393460 3714598 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0723 15:13:56.397035 3714598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0723 15:13:56.403815 3714598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0723 15:13:56.410829 3714598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0723 15:13:56.417604 3714598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0723 15:13:56.425356 3714598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0723 15:13:56.432495 3714598 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
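
Each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 24 hours; all of them passing is what lets this restart reuse the existing control-plane PKI instead of regenerating it. The equivalent check in Go (a sketch, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin mirrors `openssl x509 -checkend`: it reports whether
// the PEM-encoded certificate at path will be expired d from now.
func certExpiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}
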
I0723 15:13:56.440002 3714598 kubeadm.go:392] StartCluster: {Name:old-k8s-version-808561 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-808561 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0723 15:13:56.440122 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0723 15:13:56.440229 3714598 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0723 15:13:56.477849 3714598 cri.go:89] found id: "9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b"
I0723 15:13:56.477874 3714598 cri.go:89] found id: "abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224"
I0723 15:13:56.477880 3714598 cri.go:89] found id: "27b49add5a2e9e8baa247ed956308806cd4aad52b131a415790feca3d30db679"
I0723 15:13:56.477884 3714598 cri.go:89] found id: "c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30"
I0723 15:13:56.477887 3714598 cri.go:89] found id: "38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad"
I0723 15:13:56.477890 3714598 cri.go:89] found id: "8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd"
I0723 15:13:56.477894 3714598 cri.go:89] found id: "2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999"
I0723 15:13:56.477897 3714598 cri.go:89] found id: "3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02"
I0723 15:13:56.477924 3714598 cri.go:89] found id: ""
I0723 15:13:56.477978 3714598 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0723 15:13:56.489965 3714598 cri.go:116] JSON = null
W0723 15:13:56.490040 3714598 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
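[Editor's sketch] The warning above is a consistency check between two views of the runtime: crictl ps found 8 kube-system containers, but runc list -f json in the same root returned null, so there is nothing to unpause and minikube logs the mismatch and moves on. A hedged sketch of that comparison (not the actual kubeadm.go code):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // comparePausedList mirrors the check behind the warning: unmarshal
    // runc's JSON (a literal "null" leaves the slice nil), then complain
    // when ps saw containers that runc's list did not.
    func comparePausedList(psIDs []string, runcJSON []byte) error {
        var listed []struct {
            ID string `json:"id"`
        }
        if err := json.Unmarshal(runcJSON, &listed); err != nil {
            return err
        }
        if len(listed) == 0 && len(psIDs) > 0 {
            return fmt.Errorf("list returned 0 containers, but ps returned %d", len(psIDs))
        }
        return nil
    }

    func main() {
        err := comparePausedList([]string{"9b9f35f4c832", "abcdeac9e387"}, []byte("null"))
        fmt.Println(err) // list returned 0 containers, but ps returned 2
    }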
I0723 15:13:56.490135 3714598 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0723 15:13:56.499105 3714598 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0723 15:13:56.499125 3714598 kubeadm.go:593] restartPrimaryControlPlane start ...
I0723 15:13:56.499183 3714598 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0723 15:13:56.509065 3714598 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0723 15:13:56.509709 3714598 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-808561" does not appear in /home/jenkins/minikube-integration/19319-3501487/kubeconfig
I0723 15:13:56.510014 3714598 kubeconfig.go:62] /home/jenkins/minikube-integration/19319-3501487/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-808561" cluster setting kubeconfig missing "old-k8s-version-808561" context setting]
I0723 15:13:56.510476 3714598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3501487/kubeconfig: {Name:mk28c68c9d9b78842c0266c09085cd617f54ca70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
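[Editor's sketch] kubeconfig.go decided the file needs repair because neither a cluster nor a context entry exists for this profile, then took a timed write lock before rewriting it. A minimal sketch of such a repair using client-go's clientcmd package (illustrative; minikube uses its own kubeconfig helpers):

    package main

    import (
        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    // ensureProfile adds missing cluster and context entries for a
    // profile, then writes the kubeconfig back to disk.
    func ensureProfile(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok {
            cfg.Clusters[name] = &api.Cluster{Server: server}
        }
        if _, ok := cfg.Contexts[name]; !ok {
            cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        }
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        _ = ensureProfile("/home/jenkins/.kube/config",
            "old-k8s-version-808561", "https://192.168.94.2:8443")
    }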
I0723 15:13:56.511887 3714598 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0723 15:13:56.523301 3714598 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
I0723 15:13:56.523375 3714598 kubeadm.go:597] duration metric: took 24.242772ms to restartPrimaryControlPlane
I0723 15:13:56.523400 3714598 kubeadm.go:394] duration metric: took 83.406658ms to StartCluster
I0723 15:13:56.523450 3714598 settings.go:142] acquiring lock: {Name:mk139a8165d464eadea1fdaad6cd0d3bdc374703 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:13:56.523531 3714598 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19319-3501487/kubeconfig
I0723 15:13:56.524524 3714598 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3501487/kubeconfig: {Name:mk28c68c9d9b78842c0266c09085cd617f54ca70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:13:56.524802 3714598 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0723 15:13:56.525068 3714598 config.go:182] Loaded profile config "old-k8s-version-808561": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0723 15:13:56.525145 3714598 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0723 15:13:56.525260 3714598 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-808561"
I0723 15:13:56.525299 3714598 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-808561"
W0723 15:13:56.525328 3714598 addons.go:243] addon storage-provisioner should already be in state true
I0723 15:13:56.525346 3714598 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-808561"
I0723 15:13:56.525390 3714598 host.go:66] Checking if "old-k8s-version-808561" exists ...
I0723 15:13:56.525333 3714598 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-808561"
I0723 15:13:56.525521 3714598 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-808561"
I0723 15:13:56.525859 3714598 cli_runner.go:164] Run: docker container inspect old-k8s-version-808561 --format={{.State.Status}}
I0723 15:13:56.525901 3714598 cli_runner.go:164] Run: docker container inspect old-k8s-version-808561 --format={{.State.Status}}
I0723 15:13:56.525341 3714598 addons.go:69] Setting dashboard=true in profile "old-k8s-version-808561"
I0723 15:13:56.526322 3714598 addons.go:234] Setting addon dashboard=true in "old-k8s-version-808561"
W0723 15:13:56.526338 3714598 addons.go:243] addon dashboard should already be in state true
I0723 15:13:56.526370 3714598 host.go:66] Checking if "old-k8s-version-808561" exists ...
I0723 15:13:56.526757 3714598 cli_runner.go:164] Run: docker container inspect old-k8s-version-808561 --format={{.State.Status}}
I0723 15:13:56.525402 3714598 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-808561"
W0723 15:13:56.527002 3714598 addons.go:243] addon metrics-server should already be in state true
I0723 15:13:56.527026 3714598 host.go:66] Checking if "old-k8s-version-808561" exists ...
I0723 15:13:56.527399 3714598 cli_runner.go:164] Run: docker container inspect old-k8s-version-808561 --format={{.State.Status}}
I0723 15:13:56.530764 3714598 out.go:177] * Verifying Kubernetes components...
I0723 15:13:56.533103 3714598 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0723 15:13:56.581446 3714598 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0723 15:13:56.581463 3714598 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0723 15:13:56.584203 3714598 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0723 15:13:56.584291 3714598 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0723 15:13:56.584317 3714598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0723 15:13:56.584398 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
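[Editor's sketch] The --format argument above is a Go text/template evaluated against the container's inspect JSON: Ports maps "22/tcp" to a slice of host bindings, and two index calls plus a field lookup pull out the published SSH port (37471, which reappears in the sshutil lines below). A self-contained sketch of the same evaluation:

    package main

    import (
        "os"
        "text/template"
    )

    type binding struct{ HostIP, HostPort string }

    func main() {
        // Stand-in for the .NetworkSettings section of `docker inspect` output.
        data := struct {
            NetworkSettings struct{ Ports map[string][]binding }
        }{}
        data.NetworkSettings.Ports = map[string][]binding{
            "22/tcp": {{HostIP: "0.0.0.0", HostPort: "37471"}},
        }
        tmpl := template.Must(template.New("port").Parse(
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
        _ = tmpl.Execute(os.Stdout, data) // prints: 37471
    }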
I0723 15:13:56.586873 3714598 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-808561"
W0723 15:13:56.586895 3714598 addons.go:243] addon default-storageclass should already be in state true
I0723 15:13:56.586919 3714598 host.go:66] Checking if "old-k8s-version-808561" exists ...
I0723 15:13:56.587319 3714598 cli_runner.go:164] Run: docker container inspect old-k8s-version-808561 --format={{.State.Status}}
I0723 15:13:56.587695 3714598 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0723 15:13:56.587713 3714598 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0723 15:13:56.587760 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
I0723 15:13:56.616843 3714598 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0723 15:13:56.616865 3714598 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0723 15:13:56.616938 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
I0723 15:13:56.618039 3714598 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0723 15:13:56.619887 3714598 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0723 15:13:56.619907 3714598 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0723 15:13:56.619971 3714598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-808561
I0723 15:13:56.645821 3714598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37471 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/old-k8s-version-808561/id_rsa Username:docker}
I0723 15:13:56.685972 3714598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37471 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/old-k8s-version-808561/id_rsa Username:docker}
I0723 15:13:56.698175 3714598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37471 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/old-k8s-version-808561/id_rsa Username:docker}
I0723 15:13:56.699477 3714598 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37471 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/old-k8s-version-808561/id_rsa Username:docker}
I0723 15:13:56.710481 3714598 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0723 15:13:56.738001 3714598 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-808561" to be "Ready" ...
I0723 15:13:56.771260 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
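[Editor's sketch] Addon manifests are applied with the version-matched kubectl that minikube downloaded (/var/lib/minikube/binaries/v1.20.0/kubectl), with KUBECONFIG pointing at the in-node admin kubeconfig; sudo accepts the VAR=value prefix before the command. A sketch of that invocation pattern (applyManifest is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyManifest mirrors the Run lines: the version-matched kubectl
    // binary is invoked with KUBECONFIG set for that single command.
    func applyManifest(manifest string) ([]byte, error) {
        cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.20.0/kubectl", "apply", "-f", manifest)
        return cmd.CombinedOutput()
    }

    func main() {
        out, err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml")
        fmt.Println(string(out), err)
    }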
I0723 15:13:56.833205 3714598 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0723 15:13:56.833280 3714598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0723 15:13:56.859002 3714598 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0723 15:13:56.859075 3714598 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0723 15:13:56.872775 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0723 15:13:56.887943 3714598 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0723 15:13:56.888031 3714598 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0723 15:13:56.909956 3714598 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0723 15:13:56.910036 3714598 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
W0723 15:13:56.933163 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:56.933253 3714598 retry.go:31] will retry after 148.623111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
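[Editor's sketch] Every "will retry after ..." line in this stretch follows one pattern: the apply fails with connection refused while the apiserver container is still coming up, and retry.go reschedules it after a randomized, growing delay (148ms, 292ms, ... up to several seconds below). A hedged sketch of such a backoff loop; the constants are illustrative, not minikube's exact schedule:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryApply re-runs apply with jittered, roughly doubling delays,
    // similar to the progression of waits visible in the log.
    func retryApply(apply func() error, attempts int) error {
        backoff := 150 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(backoff) / 2))
            time.Sleep(backoff + jitter)
            backoff *= 2
        }
        return err
    }

    func main() {
        calls := 0
        err := retryApply(func() error {
            calls++
            if calls < 3 {
                return fmt.Errorf("connection to the server localhost:8443 was refused")
            }
            return nil
        }, 10)
        fmt.Println(calls, err) // 3 <nil>
    }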
I0723 15:13:56.959237 3714598 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0723 15:13:56.959313 3714598 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0723 15:13:56.979489 3714598 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0723 15:13:56.979563 3714598 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0723 15:13:56.982861 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0723 15:13:57.014228 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.014311 3714598 retry.go:31] will retry after 292.224952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.033092 3714598 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0723 15:13:57.033168 3714598 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0723 15:13:57.054649 3714598 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0723 15:13:57.054731 3714598 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0723 15:13:57.076554 3714598 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0723 15:13:57.076638 3714598 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0723 15:13:57.083014 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0723 15:13:57.098265 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.098298 3714598 retry.go:31] will retry after 173.350351ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.107209 3714598 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0723 15:13:57.107287 3714598 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0723 15:13:57.130692 3714598 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0723 15:13:57.130719 3714598 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0723 15:13:57.152538 3714598 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0723 15:13:57.152560 3714598 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0723 15:13:57.174561 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0723 15:13:57.192098 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.192179 3714598 retry.go:31] will retry after 205.75205ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0723 15:13:57.257423 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.257465 3714598 retry.go:31] will retry after 205.389556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.272618 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0723 15:13:57.307078 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0723 15:13:57.367578 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.367611 3714598 retry.go:31] will retry after 311.758885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.398920 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0723 15:13:57.405016 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.405120 3714598 retry.go:31] will retry after 229.445756ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.463387 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0723 15:13:57.478214 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.478249 3714598 retry.go:31] will retry after 501.641468ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0723 15:13:57.540569 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.540605 3714598 retry.go:31] will retry after 346.628097ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.634960 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0723 15:13:57.680382 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0723 15:13:57.719417 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.719450 3714598 retry.go:31] will retry after 388.922153ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0723 15:13:57.763408 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.763442 3714598 retry.go:31] will retry after 470.024822ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.887676 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0723 15:13:57.962814 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.962860 3714598 retry.go:31] will retry after 533.120924ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:57.981031 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0723 15:13:58.055515 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:58.055554 3714598 retry.go:31] will retry after 924.850721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:58.108716 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0723 15:13:58.184452 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:58.184486 3714598 retry.go:31] will retry after 1.008536617s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:58.233679 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0723 15:13:58.304242 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:58.304280 3714598 retry.go:31] will retry after 454.528249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:58.496894 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0723 15:13:58.596001 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:58.596034 3714598 retry.go:31] will retry after 1.076829587s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:58.738543 3714598 node_ready.go:53] error getting node "old-k8s-version-808561": Get "https://192.168.94.2:8443/api/v1/nodes/old-k8s-version-808561": dial tcp 192.168.94.2:8443: connect: connection refused
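[Editor's sketch] node_ready.go polls the node object directly at the apiserver endpoint; a connection-refused error like the one above counts as "not ready yet" rather than fatal, because the control plane is still restarting. A sketch of that style of wait with client-go (assuming an already-built clientset; not minikube's exact code):

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the node's NodeReady condition is True,
    // swallowing transient errors such as "connection refused".
    func waitNodeReady(cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(context.Background(),
            2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // apiserver may still be coming up
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }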
I0723 15:13:58.759824 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0723 15:13:58.856729 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:58.856773 3714598 retry.go:31] will retry after 1.383855197s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:58.981061 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0723 15:13:59.072645 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:59.072685 3714598 retry.go:31] will retry after 1.862838261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:59.194087 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0723 15:13:59.265312 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:59.265345 3714598 retry.go:31] will retry after 1.678958345s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:59.673481 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0723 15:13:59.741974 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:13:59.742004 3714598 retry.go:31] will retry after 1.257330037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:00.248118 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0723 15:14:00.398400 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:00.398504 3714598 retry.go:31] will retry after 2.464701611s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:00.738989 3714598 node_ready.go:53] error getting node "old-k8s-version-808561": Get "https://192.168.94.2:8443/api/v1/nodes/old-k8s-version-808561": dial tcp 192.168.94.2:8443: connect: connection refused
I0723 15:14:00.936378 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0723 15:14:00.944734 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0723 15:14:00.999996 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0723 15:14:01.027243 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:01.027323 3714598 retry.go:31] will retry after 2.257466508s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0723 15:14:01.057248 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:01.057327 3714598 retry.go:31] will retry after 2.360155873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0723 15:14:01.101177 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:01.101258 3714598 retry.go:31] will retry after 2.398778591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:02.739513 3714598 node_ready.go:53] error getting node "old-k8s-version-808561": Get "https://192.168.94.2:8443/api/v1/nodes/old-k8s-version-808561": dial tcp 192.168.94.2:8443: connect: connection refused
I0723 15:14:02.863885 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0723 15:14:02.952214 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:02.952253 3714598 retry.go:31] will retry after 2.21125633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:03.285176 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0723 15:14:03.362810 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:03.362846 3714598 retry.go:31] will retry after 3.637297132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:03.417989 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0723 15:14:03.490835 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:03.490868 3714598 retry.go:31] will retry after 3.32088001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:03.501021 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0723 15:14:03.576806 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:03.576837 3714598 retry.go:31] will retry after 2.015217281s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:05.163979 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0723 15:14:05.238642 3714598 node_ready.go:53] error getting node "old-k8s-version-808561": Get "https://192.168.94.2:8443/api/v1/nodes/old-k8s-version-808561": dial tcp 192.168.94.2:8443: connect: connection refused
W0723 15:14:05.355666 3714598 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:05.355694 3714598 retry.go:31] will retry after 5.059855896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0723 15:14:05.593164 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0723 15:14:06.812010 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0723 15:14:07.000415 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0723 15:14:10.416479 3714598 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0723 15:14:13.985098 3714598 node_ready.go:49] node "old-k8s-version-808561" has status "Ready":"True"
I0723 15:14:13.985125 3714598 node_ready.go:38] duration metric: took 17.247085435s for node "old-k8s-version-808561" to be "Ready" ...
I0723 15:14:13.985135 3714598 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0723 15:14:14.189831 3714598 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-shrvw" in "kube-system" namespace to be "Ready" ...
I0723 15:14:14.268052 3714598 pod_ready.go:92] pod "coredns-74ff55c5b-shrvw" in "kube-system" namespace has status "Ready":"True"
I0723 15:14:14.268128 3714598 pod_ready.go:81] duration metric: took 78.218964ms for pod "coredns-74ff55c5b-shrvw" in "kube-system" namespace to be "Ready" ...
I0723 15:14:14.268157 3714598 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-808561" in "kube-system" namespace to be "Ready" ...
I0723 15:14:14.339322 3714598 pod_ready.go:92] pod "etcd-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"True"
I0723 15:14:14.339395 3714598 pod_ready.go:81] duration metric: took 71.218584ms for pod "etcd-old-k8s-version-808561" in "kube-system" namespace to be "Ready" ...
I0723 15:14:14.339426 3714598 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-808561" in "kube-system" namespace to be "Ready" ...
I0723 15:14:15.260042 3714598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.666815024s)
I0723 15:14:15.260276 3714598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.448233524s)
I0723 15:14:15.260427 3714598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.25998728s)
I0723 15:14:15.260531 3714598 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.844016736s)
I0723 15:14:15.260566 3714598 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-808561"
I0723 15:14:15.262306 3714598 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-808561 addons enable metrics-server
I0723 15:14:15.277851 3714598 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
I0723 15:14:15.279809 3714598 addons.go:510] duration metric: took 18.754655706s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
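The four "Completed:" lines above all land at 15:14:15 despite individual durations of 4.8s to 9.7s, which implies the addon applies run concurrently. A hedged errgroup sketch of that fan-out (manifest list abridged; not minikube's actual addons code):

package main

import (
	"context"
	"fmt"
	"os/exec"

	"golang.org/x/sync/errgroup"
)

func main() {
	// Each entry is one addon's manifest set, applied in its own goroutine.
	manifests := [][]string{
		{"/etc/kubernetes/addons/storage-provisioner.yaml"},
		{"/etc/kubernetes/addons/storageclass.yaml"},
		// dashboard and metrics-server manifest sets elided for brevity
	}
	g, ctx := errgroup.WithContext(context.Background())
	for _, files := range manifests {
		files := files // capture loop variable per goroutine
		g.Go(func() error {
			args := []string{"apply", "--force"}
			for _, f := range files {
				args = append(args, "-f", f)
			}
			return exec.CommandContext(ctx, "kubectl", args...).Run()
		})
	}
	// Wait returns the first error from any apply, or nil.
	fmt.Println("addons applied:", g.Wait())
}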
I0723 15:14:16.349304 3714598 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:18.845245 3714598 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:20.845782 3714598 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:22.849296 3714598 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:25.345338 3714598 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"True"
I0723 15:14:25.345366 3714598 pod_ready.go:81] duration metric: took 11.005912047s for pod "kube-apiserver-old-k8s-version-808561" in "kube-system" namespace to be "Ready" ...
I0723 15:14:25.345378 3714598 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace to be "Ready" ...
I0723 15:14:27.351586 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:29.352992 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:31.481793 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:33.905027 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:36.352653 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:38.970193 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:41.352274 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:43.354075 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:45.858986 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:47.893158 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:50.352903 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:52.852892 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:55.351702 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:14:57.851630 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:00.377025 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:02.852188 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:05.351803 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:07.352000 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:09.851683 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:11.852558 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:14.353516 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:16.852268 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:18.852420 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:21.351650 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:23.352142 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:25.851283 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:27.852398 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:29.853068 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:32.351078 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:34.418632 3714598 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:34.852918 3714598 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"True"
I0723 15:15:34.852948 3714598 pod_ready.go:81] duration metric: took 1m9.507557986s for pod "kube-controller-manager-old-k8s-version-808561" in "kube-system" namespace to be "Ready" ...
I0723 15:15:34.852961 3714598 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7tf2r" in "kube-system" namespace to be "Ready" ...
I0723 15:15:34.858547 3714598 pod_ready.go:92] pod "kube-proxy-7tf2r" in "kube-system" namespace has status "Ready":"True"
I0723 15:15:34.858577 3714598 pod_ready.go:81] duration metric: took 5.603572ms for pod "kube-proxy-7tf2r" in "kube-system" namespace to be "Ready" ...
I0723 15:15:34.858595 3714598 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-808561" in "kube-system" namespace to be "Ready" ...
I0723 15:15:36.863983 3714598 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:37.864856 3714598 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-808561" in "kube-system" namespace has status "Ready":"True"
I0723 15:15:37.864884 3714598 pod_ready.go:81] duration metric: took 3.006280732s for pod "kube-scheduler-old-k8s-version-808561" in "kube-system" namespace to be "Ready" ...
I0723 15:15:37.864896 3714598 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace to be "Ready" ...
I0723 15:15:39.871356 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:42.372162 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:44.871071 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:47.370865 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:49.870745 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:51.871279 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:54.372224 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:56.875005 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:15:59.370637 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:01.371513 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:03.871152 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:06.371163 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:08.871007 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:10.871655 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:13.370962 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:15.870441 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:17.871329 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:20.370525 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:22.870971 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:25.371964 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:27.871458 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:30.370355 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:32.872402 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:35.370811 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:37.371342 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:39.372069 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:41.870479 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:43.871363 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:45.871794 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:48.370618 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:50.370793 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:52.371234 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:54.871784 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:57.371247 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:16:59.379455 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:01.381143 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:03.382602 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:05.871234 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:07.872856 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:10.376602 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:12.870942 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:14.871009 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:17.371028 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:19.871508 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:21.871691 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:24.370252 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:26.871583 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:28.871714 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:31.371329 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:33.371386 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:35.408868 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:37.871664 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:40.371616 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:42.372835 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:44.871123 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:47.370493 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:49.371692 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:51.871609 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:54.371575 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:56.870883 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:17:58.873002 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:01.371396 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:03.375025 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:05.871331 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:07.871904 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:09.872761 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:12.371306 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:14.871063 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:16.871164 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:18.871533 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:21.371161 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:23.371383 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:25.870753 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:27.871935 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:30.371281 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:32.871922 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:35.371054 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:37.872040 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:40.371257 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:42.871911 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:44.872671 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:47.370961 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:49.871744 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:51.873464 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:54.371088 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:56.871740 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:18:58.872376 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:01.371403 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:03.872076 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:06.371421 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:08.871884 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:10.872619 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:12.874756 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:15.392173 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:17.873922 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:20.370392 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:22.371785 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:24.373992 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:26.408441 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:28.873586 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:31.372498 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:33.872284 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:36.371249 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:37.871829 3714598 pod_ready.go:81] duration metric: took 4m0.006918868s for pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace to be "Ready" ...
E0723 15:19:37.871864 3714598 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0723 15:19:37.871873 3714598 pod_ready.go:38] duration metric: took 5m23.886727323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
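The 4m0s wait that just timed out polls the pod's Ready condition until a deadline. For reference, the same shape expressed with client-go; an illustrative sketch only, not minikube's pod_ready.go, assuming k8s.io/client-go and k8s.io/apimachinery are vendored and a kubeconfig exists at the path the log shows (error handling elided):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's PodReady condition is True.
// Transient lookup errors return (false, nil) so polling continues.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, nil
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	cs, _ := kubernetes.NewForConfig(cfg)
	// Poll every 2s, give up after 4m, mirroring the deadline above.
	err := wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			return podReady(ctx, cs, "kube-system", "metrics-server-9975d5f86-mrdtz")
		})
	fmt.Println("wait result:", err)
}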
I0723 15:19:37.871886 3714598 api_server.go:52] waiting for apiserver process to appear ...
I0723 15:19:37.871916 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0723 15:19:37.871982 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0723 15:19:37.944728 3714598 cri.go:89] found id: "3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b"
I0723 15:19:37.944753 3714598 cri.go:89] found id: "3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02"
I0723 15:19:37.944758 3714598 cri.go:89] found id: ""
I0723 15:19:37.944765 3714598 logs.go:276] 2 containers: [3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b 3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02]
I0723 15:19:37.944824 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:37.956790 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:37.960739 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0723 15:19:37.960812 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0723 15:19:38.023180 3714598 cri.go:89] found id: "e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69"
I0723 15:19:38.023207 3714598 cri.go:89] found id: "38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad"
I0723 15:19:38.023212 3714598 cri.go:89] found id: ""
I0723 15:19:38.023220 3714598 logs.go:276] 2 containers: [e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69 38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad]
I0723 15:19:38.023287 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.028272 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.032944 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0723 15:19:38.033029 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0723 15:19:38.108278 3714598 cri.go:89] found id: "21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7"
I0723 15:19:38.108321 3714598 cri.go:89] found id: "9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b"
I0723 15:19:38.108327 3714598 cri.go:89] found id: ""
I0723 15:19:38.108335 3714598 logs.go:276] 2 containers: [21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7 9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b]
I0723 15:19:38.108394 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.116642 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.121037 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0723 15:19:38.121113 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0723 15:19:38.200574 3714598 cri.go:89] found id: "2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a"
I0723 15:19:38.200599 3714598 cri.go:89] found id: "2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999"
I0723 15:19:38.200604 3714598 cri.go:89] found id: ""
I0723 15:19:38.200612 3714598 logs.go:276] 2 containers: [2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a 2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999]
I0723 15:19:38.200669 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.206112 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.230286 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0723 15:19:38.230410 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0723 15:19:38.324505 3714598 cri.go:89] found id: "77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c"
I0723 15:19:38.324529 3714598 cri.go:89] found id: "c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30"
I0723 15:19:38.324534 3714598 cri.go:89] found id: ""
I0723 15:19:38.324542 3714598 logs.go:276] 2 containers: [77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30]
I0723 15:19:38.324602 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.330443 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.334064 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0723 15:19:38.334147 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0723 15:19:38.392951 3714598 cri.go:89] found id: "04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876"
I0723 15:19:38.393029 3714598 cri.go:89] found id: "8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd"
I0723 15:19:38.393047 3714598 cri.go:89] found id: ""
I0723 15:19:38.393086 3714598 logs.go:276] 2 containers: [04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876 8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd]
I0723 15:19:38.393173 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.400994 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.406747 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0723 15:19:38.406869 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0723 15:19:38.512283 3714598 cri.go:89] found id: "8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac"
I0723 15:19:38.512364 3714598 cri.go:89] found id: "abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224"
I0723 15:19:38.512382 3714598 cri.go:89] found id: ""
I0723 15:19:38.512402 3714598 logs.go:276] 2 containers: [8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224]
I0723 15:19:38.512485 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.520453 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.524098 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0723 15:19:38.524214 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0723 15:19:38.601870 3714598 cri.go:89] found id: "c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8"
I0723 15:19:38.601940 3714598 cri.go:89] found id: ""
I0723 15:19:38.601960 3714598 logs.go:276] 1 containers: [c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8]
I0723 15:19:38.602086 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.607099 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0723 15:19:38.607217 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0723 15:19:38.713066 3714598 cri.go:89] found id: "a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2"
I0723 15:19:38.713092 3714598 cri.go:89] found id: "9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6"
I0723 15:19:38.713097 3714598 cri.go:89] found id: ""
I0723 15:19:38.713105 3714598 logs.go:276] 2 containers: [a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2 9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6]
I0723 15:19:38.713175 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.717141 3714598 ssh_runner.go:195] Run: which crictl
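Each "listing CRI containers" step above resolves crictl and runs `crictl ps -a --quiet --name=<component>`, which prints one container ID per line. A minimal Go sketch of that discovery pattern, assuming crictl on PATH and passwordless sudo (the real flow resolves the binary with `which crictl` and executes over SSH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// findContainers returns the IDs of all containers (running or exited)
// whose name matches the given filter, one ID per output line.
func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := findContainers("kube-apiserver")
	fmt.Println(ids, err)
}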
I0723 15:19:38.721751 3714598 logs.go:123] Gathering logs for kube-proxy [77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c] ...
I0723 15:19:38.721777 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c"
I0723 15:19:38.797009 3714598 logs.go:123] Gathering logs for kubernetes-dashboard [c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8] ...
I0723 15:19:38.797086 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8"
I0723 15:19:38.861327 3714598 logs.go:123] Gathering logs for storage-provisioner [9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6] ...
I0723 15:19:38.861398 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6"
I0723 15:19:38.911530 3714598 logs.go:123] Gathering logs for containerd ...
I0723 15:19:38.911600 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0723 15:19:38.978921 3714598 logs.go:123] Gathering logs for container status ...
I0723 15:19:38.979018 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0723 15:19:39.058845 3714598 logs.go:123] Gathering logs for describe nodes ...
I0723 15:19:39.059112 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0723 15:19:39.386817 3714598 logs.go:123] Gathering logs for etcd [e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69] ...
I0723 15:19:39.386849 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69"
I0723 15:19:39.461694 3714598 logs.go:123] Gathering logs for etcd [38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad] ...
I0723 15:19:39.461728 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad"
I0723 15:19:39.550639 3714598 logs.go:123] Gathering logs for coredns [9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b] ...
I0723 15:19:39.550672 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b"
I0723 15:19:39.685274 3714598 logs.go:123] Gathering logs for kube-scheduler [2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a] ...
I0723 15:19:39.685302 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a"
I0723 15:19:39.767623 3714598 logs.go:123] Gathering logs for kube-controller-manager [04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876] ...
I0723 15:19:39.767658 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876"
I0723 15:19:39.848209 3714598 logs.go:123] Gathering logs for kubelet ...
I0723 15:19:39.848249 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0723 15:19:39.905588 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997695 651 reflector.go:138] object-"kube-system"/"metrics-server-token-555md": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-555md" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.905840 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997781 651 reflector.go:138] object-"kube-system"/"storage-provisioner-token-52k6k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-52k6k" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.906056 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997861 651 reflector.go:138] object-"kube-system"/"coredns-token-jjvjk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jjvjk" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.906259 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997906 651 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.906478 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.998074 651 reflector.go:138] object-"kube-system"/"kube-proxy-token-s27dt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s27dt" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.906685 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.998109 651 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.906896 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:14.006437 651 reflector.go:138] object-"kube-system"/"kindnet-token-s2lzs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-s2lzs" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.907120 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:14.008948 651 reflector.go:138] object-"default"/"default-token-77r4d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-77r4d" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.918723 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:16 old-k8s-version-808561 kubelet[651]: E0723 15:14:16.784630 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:39.918922 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:17 old-k8s-version-808561 kubelet[651]: E0723 15:14:17.688046 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.921852 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:31 old-k8s-version-808561 kubelet[651]: E0723 15:14:31.509921 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:39.924131 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.489157 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.924611 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.834885 651 pod_workers.go:191] Error syncing pod 696d8a65-c479-4c8f-80f4-2d9b92600046 ("storage-provisioner_kube-system(696d8a65-c479-4c8f-80f4-2d9b92600046)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(696d8a65-c479-4c8f-80f4-2d9b92600046)"
W0723 15:19:39.924951 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.850656 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.925451 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:47 old-k8s-version-808561 kubelet[651]: E0723 15:14:47.855510 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.925788 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:51 old-k8s-version-808561 kubelet[651]: E0723 15:14:51.896987 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.928719 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:59 old-k8s-version-808561 kubelet[651]: E0723 15:14:59.501566 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:39.929323 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:06 old-k8s-version-808561 kubelet[651]: E0723 15:15:06.932141 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.929512 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:11 old-k8s-version-808561 kubelet[651]: E0723 15:15:11.484662 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.929843 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:11 old-k8s-version-808561 kubelet[651]: E0723 15:15:11.896849 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.930030 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:22 old-k8s-version-808561 kubelet[651]: E0723 15:15:22.484956 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.930365 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:23 old-k8s-version-808561 kubelet[651]: E0723 15:15:23.484104 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.930551 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:35 old-k8s-version-808561 kubelet[651]: E0723 15:15:35.487200 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.931143 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:37 old-k8s-version-808561 kubelet[651]: E0723 15:15:37.008061 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.931481 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:41 old-k8s-version-808561 kubelet[651]: E0723 15:15:41.896584 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.934021 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:49 old-k8s-version-808561 kubelet[651]: E0723 15:15:49.495729 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:39.934361 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:53 old-k8s-version-808561 kubelet[651]: E0723 15:15:53.484024 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.934550 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:04 old-k8s-version-808561 kubelet[651]: E0723 15:16:04.492638 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.934880 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:08 old-k8s-version-808561 kubelet[651]: E0723 15:16:08.489645 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.935070 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:19 old-k8s-version-808561 kubelet[651]: E0723 15:16:19.484440 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.935675 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:24 old-k8s-version-808561 kubelet[651]: E0723 15:16:24.156639 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.936007 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:31 old-k8s-version-808561 kubelet[651]: E0723 15:16:31.896663 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.936193 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:32 old-k8s-version-808561 kubelet[651]: E0723 15:16:32.484517 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.936563 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:44 old-k8s-version-808561 kubelet[651]: E0723 15:16:44.485094 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.936754 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:47 old-k8s-version-808561 kubelet[651]: E0723 15:16:47.484454 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.937089 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:57 old-k8s-version-808561 kubelet[651]: E0723 15:16:57.484249 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.937275 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:00 old-k8s-version-808561 kubelet[651]: E0723 15:17:00.498327 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.937608 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:09 old-k8s-version-808561 kubelet[651]: E0723 15:17:09.484011 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.940643 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:12 old-k8s-version-808561 kubelet[651]: E0723 15:17:12.496276 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:39.941002 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:24 old-k8s-version-808561 kubelet[651]: E0723 15:17:24.485118 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.941191 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:26 old-k8s-version-808561 kubelet[651]: E0723 15:17:26.486450 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.941525 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:37 old-k8s-version-808561 kubelet[651]: E0723 15:17:37.484898 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.941710 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:37 old-k8s-version-808561 kubelet[651]: E0723 15:17:37.489346 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.942300 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:50 old-k8s-version-808561 kubelet[651]: E0723 15:17:50.400777 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.942487 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:50 old-k8s-version-808561 kubelet[651]: E0723 15:17:50.492734 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.942819 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:51 old-k8s-version-808561 kubelet[651]: E0723 15:17:51.896923 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.943009 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:02 old-k8s-version-808561 kubelet[651]: E0723 15:18:02.485438 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.943348 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:05 old-k8s-version-808561 kubelet[651]: E0723 15:18:05.484049 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.943560 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:16 old-k8s-version-808561 kubelet[651]: E0723 15:18:16.485295 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.943895 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:17 old-k8s-version-808561 kubelet[651]: E0723 15:18:17.484062 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.944227 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:29 old-k8s-version-808561 kubelet[651]: E0723 15:18:29.484090 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.944418 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:31 old-k8s-version-808561 kubelet[651]: E0723 15:18:31.484369 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.944750 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:42 old-k8s-version-808561 kubelet[651]: E0723 15:18:42.487949 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.944936 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:42 old-k8s-version-808561 kubelet[651]: E0723 15:18:42.488698 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.945299 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:53 old-k8s-version-808561 kubelet[651]: E0723 15:18:53.484799 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.945490 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:56 old-k8s-version-808561 kubelet[651]: E0723 15:18:56.487827 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.945675 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:07 old-k8s-version-808561 kubelet[651]: E0723 15:19:07.488216 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.946009 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:08 old-k8s-version-808561 kubelet[651]: E0723 15:19:08.484416 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.946343 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:21 old-k8s-version-808561 kubelet[651]: E0723 15:19:21.484079 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.946529 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:22 old-k8s-version-808561 kubelet[651]: E0723 15:19:22.484915 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.946860 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:34 old-k8s-version-808561 kubelet[651]: E0723 15:19:34.485043 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.947051 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:37 old-k8s-version-808561 kubelet[651]: E0723 15:19:37.484452 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0723 15:19:39.947061 3714598 logs.go:123] Gathering logs for kube-apiserver [3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02] ...
I0723 15:19:39.947075 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02"
I0723 15:19:40.015018 3714598 logs.go:123] Gathering logs for storage-provisioner [a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2] ...
I0723 15:19:40.015060 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2"
I0723 15:19:40.079126 3714598 logs.go:123] Gathering logs for kindnet [8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac] ...
I0723 15:19:40.079162 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac"
I0723 15:19:40.147283 3714598 logs.go:123] Gathering logs for kindnet [abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224] ...
I0723 15:19:40.147332 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224"
I0723 15:19:40.225304 3714598 logs.go:123] Gathering logs for dmesg ...
I0723 15:19:40.225339 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0723 15:19:40.248168 3714598 logs.go:123] Gathering logs for kube-apiserver [3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b] ...
I0723 15:19:40.248199 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b"
I0723 15:19:40.337459 3714598 logs.go:123] Gathering logs for coredns [21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7] ...
I0723 15:19:40.337496 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7"
I0723 15:19:40.395987 3714598 logs.go:123] Gathering logs for kube-scheduler [2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999] ...
I0723 15:19:40.396021 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999"
I0723 15:19:40.457141 3714598 logs.go:123] Gathering logs for kube-proxy [c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30] ...
I0723 15:19:40.457226 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30"
I0723 15:19:40.515290 3714598 logs.go:123] Gathering logs for kube-controller-manager [8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd] ...
I0723 15:19:40.515320 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd"
I0723 15:19:40.584717 3714598 out.go:304] Setting ErrFile to fd 2...
I0723 15:19:40.584750 3714598 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0723 15:19:40.584820 3714598 out.go:239] X Problems detected in kubelet:
W0723 15:19:40.584839 3714598 out.go:239] Jul 23 15:19:08 old-k8s-version-808561 kubelet[651]: E0723 15:19:08.484416 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:40.584851 3714598 out.go:239] Jul 23 15:19:21 old-k8s-version-808561 kubelet[651]: E0723 15:19:21.484079 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:40.584860 3714598 out.go:239] Jul 23 15:19:22 old-k8s-version-808561 kubelet[651]: E0723 15:19:22.484915 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:40.585052 3714598 out.go:239] Jul 23 15:19:34 old-k8s-version-808561 kubelet[651]: E0723 15:19:34.485043 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:40.585072 3714598 out.go:239] Jul 23 15:19:37 old-k8s-version-808561 kubelet[651]: E0723 15:19:37.484452 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0723 15:19:40.585080 3714598 out.go:304] Setting ErrFile to fd 2...
I0723 15:19:40.585096 3714598 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 15:19:50.586268 3714598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0723 15:19:50.601983 3714598 api_server.go:72] duration metric: took 5m54.077146224s to wait for apiserver process to appear ...
I0723 15:19:50.602012 3714598 api_server.go:88] waiting for apiserver healthz status ...
I0723 15:19:50.602052 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0723 15:19:50.602110 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0723 15:19:50.649752 3714598 cri.go:89] found id: "3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b"
I0723 15:19:50.649777 3714598 cri.go:89] found id: "3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02"
I0723 15:19:50.649782 3714598 cri.go:89] found id: ""
I0723 15:19:50.649789 3714598 logs.go:276] 2 containers: [3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b 3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02]
I0723 15:19:50.649855 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.654322 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.658405 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0723 15:19:50.658522 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0723 15:19:50.713080 3714598 cri.go:89] found id: "e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69"
I0723 15:19:50.713103 3714598 cri.go:89] found id: "38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad"
I0723 15:19:50.713109 3714598 cri.go:89] found id: ""
I0723 15:19:50.713116 3714598 logs.go:276] 2 containers: [e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69 38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad]
I0723 15:19:50.713179 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.717803 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.722153 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0723 15:19:50.722230 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0723 15:19:50.775333 3714598 cri.go:89] found id: "21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7"
I0723 15:19:50.775358 3714598 cri.go:89] found id: "9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b"
I0723 15:19:50.775362 3714598 cri.go:89] found id: ""
I0723 15:19:50.775369 3714598 logs.go:276] 2 containers: [21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7 9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b]
I0723 15:19:50.775456 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.785807 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.790120 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0723 15:19:50.790191 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0723 15:19:50.853470 3714598 cri.go:89] found id: "2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a"
I0723 15:19:50.853495 3714598 cri.go:89] found id: "2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999"
I0723 15:19:50.853500 3714598 cri.go:89] found id: ""
I0723 15:19:50.853507 3714598 logs.go:276] 2 containers: [2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a 2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999]
I0723 15:19:50.853565 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.857352 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.861304 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0723 15:19:50.861374 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0723 15:19:50.899012 3714598 cri.go:89] found id: "77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c"
I0723 15:19:50.899033 3714598 cri.go:89] found id: "c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30"
I0723 15:19:50.899037 3714598 cri.go:89] found id: ""
I0723 15:19:50.899044 3714598 logs.go:276] 2 containers: [77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30]
I0723 15:19:50.899111 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.902779 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.906249 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0723 15:19:50.906320 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0723 15:19:50.944973 3714598 cri.go:89] found id: "04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876"
I0723 15:19:50.944997 3714598 cri.go:89] found id: "8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd"
I0723 15:19:50.945003 3714598 cri.go:89] found id: ""
I0723 15:19:50.945010 3714598 logs.go:276] 2 containers: [04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876 8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd]
I0723 15:19:50.945067 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.948972 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.952796 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0723 15:19:50.952897 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0723 15:19:50.997448 3714598 cri.go:89] found id: "8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac"
I0723 15:19:50.997473 3714598 cri.go:89] found id: "abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224"
I0723 15:19:50.997478 3714598 cri.go:89] found id: ""
I0723 15:19:50.997484 3714598 logs.go:276] 2 containers: [8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224]
I0723 15:19:50.997570 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:51.001317 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:51.007158 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0723 15:19:51.007302 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0723 15:19:51.053632 3714598 cri.go:89] found id: "c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8"
I0723 15:19:51.053712 3714598 cri.go:89] found id: ""
I0723 15:19:51.053737 3714598 logs.go:276] 1 containers: [c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8]
I0723 15:19:51.053818 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:51.058478 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0723 15:19:51.058585 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0723 15:19:51.104046 3714598 cri.go:89] found id: "a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2"
I0723 15:19:51.104075 3714598 cri.go:89] found id: "9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6"
I0723 15:19:51.104081 3714598 cri.go:89] found id: ""
I0723 15:19:51.104088 3714598 logs.go:276] 2 containers: [a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2 9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6]
I0723 15:19:51.104151 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:51.109019 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:51.113791 3714598 logs.go:123] Gathering logs for dmesg ...
I0723 15:19:51.113868 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0723 15:19:51.140653 3714598 logs.go:123] Gathering logs for describe nodes ...
I0723 15:19:51.140735 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0723 15:19:51.326563 3714598 logs.go:123] Gathering logs for kube-apiserver [3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b] ...
I0723 15:19:51.326598 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b"
I0723 15:19:51.386022 3714598 logs.go:123] Gathering logs for kube-apiserver [3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02] ...
I0723 15:19:51.386056 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02"
I0723 15:19:51.437413 3714598 logs.go:123] Gathering logs for kube-proxy [77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c] ...
I0723 15:19:51.437468 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c"
I0723 15:19:51.475222 3714598 logs.go:123] Gathering logs for kube-proxy [c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30] ...
I0723 15:19:51.475249 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30"
I0723 15:19:51.522639 3714598 logs.go:123] Gathering logs for kubernetes-dashboard [c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8] ...
I0723 15:19:51.522667 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8"
I0723 15:19:51.562605 3714598 logs.go:123] Gathering logs for kubelet ...
I0723 15:19:51.562632 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0723 15:19:51.609723 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997695 651 reflector.go:138] object-"kube-system"/"metrics-server-token-555md": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-555md" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.610001 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997781 651 reflector.go:138] object-"kube-system"/"storage-provisioner-token-52k6k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-52k6k" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.610220 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997861 651 reflector.go:138] object-"kube-system"/"coredns-token-jjvjk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jjvjk" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.610429 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997906 651 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.610655 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.998074 651 reflector.go:138] object-"kube-system"/"kube-proxy-token-s27dt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s27dt" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.610861 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.998109 651 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.611075 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:14.006437 651 reflector.go:138] object-"kube-system"/"kindnet-token-s2lzs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-s2lzs" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.611283 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:14.008948 651 reflector.go:138] object-"default"/"default-token-77r4d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-77r4d" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.622766 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:16 old-k8s-version-808561 kubelet[651]: E0723 15:14:16.784630 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:51.622961 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:17 old-k8s-version-808561 kubelet[651]: E0723 15:14:17.688046 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.625780 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:31 old-k8s-version-808561 kubelet[651]: E0723 15:14:31.509921 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:51.628051 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.489157 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.628581 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.834885 651 pod_workers.go:191] Error syncing pod 696d8a65-c479-4c8f-80f4-2d9b92600046 ("storage-provisioner_kube-system(696d8a65-c479-4c8f-80f4-2d9b92600046)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(696d8a65-c479-4c8f-80f4-2d9b92600046)"
W0723 15:19:51.629068 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.850656 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.629714 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:47 old-k8s-version-808561 kubelet[651]: E0723 15:14:47.855510 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.630137 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:51 old-k8s-version-808561 kubelet[651]: E0723 15:14:51.896987 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.633294 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:59 old-k8s-version-808561 kubelet[651]: E0723 15:14:59.501566 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:51.633904 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:06 old-k8s-version-808561 kubelet[651]: E0723 15:15:06.932141 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.634091 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:11 old-k8s-version-808561 kubelet[651]: E0723 15:15:11.484662 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.634421 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:11 old-k8s-version-808561 kubelet[651]: E0723 15:15:11.896849 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.634608 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:22 old-k8s-version-808561 kubelet[651]: E0723 15:15:22.484956 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.634992 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:23 old-k8s-version-808561 kubelet[651]: E0723 15:15:23.484104 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.635183 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:35 old-k8s-version-808561 kubelet[651]: E0723 15:15:35.487200 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.635780 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:37 old-k8s-version-808561 kubelet[651]: E0723 15:15:37.008061 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.636128 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:41 old-k8s-version-808561 kubelet[651]: E0723 15:15:41.896584 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.638602 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:49 old-k8s-version-808561 kubelet[651]: E0723 15:15:49.495729 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:51.638938 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:53 old-k8s-version-808561 kubelet[651]: E0723 15:15:53.484024 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.639126 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:04 old-k8s-version-808561 kubelet[651]: E0723 15:16:04.492638 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.639464 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:08 old-k8s-version-808561 kubelet[651]: E0723 15:16:08.489645 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.639655 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:19 old-k8s-version-808561 kubelet[651]: E0723 15:16:19.484440 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.640251 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:24 old-k8s-version-808561 kubelet[651]: E0723 15:16:24.156639 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.640610 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:31 old-k8s-version-808561 kubelet[651]: E0723 15:16:31.896663 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.640798 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:32 old-k8s-version-808561 kubelet[651]: E0723 15:16:32.484517 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.641132 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:44 old-k8s-version-808561 kubelet[651]: E0723 15:16:44.485094 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.641319 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:47 old-k8s-version-808561 kubelet[651]: E0723 15:16:47.484454 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.641650 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:57 old-k8s-version-808561 kubelet[651]: E0723 15:16:57.484249 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.641836 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:00 old-k8s-version-808561 kubelet[651]: E0723 15:17:00.498327 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.642170 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:09 old-k8s-version-808561 kubelet[651]: E0723 15:17:09.484011 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.644838 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:12 old-k8s-version-808561 kubelet[651]: E0723 15:17:12.496276 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:51.645183 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:24 old-k8s-version-808561 kubelet[651]: E0723 15:17:24.485118 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.645372 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:26 old-k8s-version-808561 kubelet[651]: E0723 15:17:26.486450 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.645705 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:37 old-k8s-version-808561 kubelet[651]: E0723 15:17:37.484898 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.645891 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:37 old-k8s-version-808561 kubelet[651]: E0723 15:17:37.489346 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.646523 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:50 old-k8s-version-808561 kubelet[651]: E0723 15:17:50.400777 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.646714 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:50 old-k8s-version-808561 kubelet[651]: E0723 15:17:50.492734 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.647054 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:51 old-k8s-version-808561 kubelet[651]: E0723 15:17:51.896923 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.647241 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:02 old-k8s-version-808561 kubelet[651]: E0723 15:18:02.485438 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.652403 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:05 old-k8s-version-808561 kubelet[651]: E0723 15:18:05.484049 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.652623 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:16 old-k8s-version-808561 kubelet[651]: E0723 15:18:16.485295 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.652993 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:17 old-k8s-version-808561 kubelet[651]: E0723 15:18:17.484062 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.653328 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:29 old-k8s-version-808561 kubelet[651]: E0723 15:18:29.484090 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.653517 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:31 old-k8s-version-808561 kubelet[651]: E0723 15:18:31.484369 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.653849 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:42 old-k8s-version-808561 kubelet[651]: E0723 15:18:42.487949 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.654032 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:42 old-k8s-version-808561 kubelet[651]: E0723 15:18:42.488698 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.654363 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:53 old-k8s-version-808561 kubelet[651]: E0723 15:18:53.484799 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.654549 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:56 old-k8s-version-808561 kubelet[651]: E0723 15:18:56.487827 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.654735 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:07 old-k8s-version-808561 kubelet[651]: E0723 15:19:07.488216 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.655066 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:08 old-k8s-version-808561 kubelet[651]: E0723 15:19:08.484416 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.655405 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:21 old-k8s-version-808561 kubelet[651]: E0723 15:19:21.484079 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.660007 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:22 old-k8s-version-808561 kubelet[651]: E0723 15:19:22.484915 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.660411 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:34 old-k8s-version-808561 kubelet[651]: E0723 15:19:34.485043 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.660612 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:37 old-k8s-version-808561 kubelet[651]: E0723 15:19:37.484452 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.660947 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:45 old-k8s-version-808561 kubelet[651]: E0723 15:19:45.484096 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.661134 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:50 old-k8s-version-808561 kubelet[651]: E0723 15:19:50.484391 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
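# The two errors repeating above for the entire wait window are expected side
# effects of this test: metrics-server was re-registered under the unreachable
# registry fake.domain (hence ImagePullBackOff; see the "addons enable
# metrics-server ... --registries=MetricsServer=fake.domain" row in the audit
# table below), and dashboard-metrics-scraper sits in CrashLoopBackOff. A
# hedged sketch for inspecting either pod by hand on the node, reusing the
# same crictl binary this log invokes (<container-id> is a placeholder to
# fill in from the first command, not a value from this run):
sudo /usr/bin/crictl ps -a --name dashboard-metrics-scraper
sudo /usr/bin/crictl logs --tail 50 <container-id>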
I0723 15:19:51.661154 3714598 logs.go:123] Gathering logs for coredns [9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b] ...
I0723 15:19:51.661173 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b"
I0723 15:19:51.711411 3714598 logs.go:123] Gathering logs for kube-scheduler [2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999] ...
I0723 15:19:51.711442 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999"
I0723 15:19:51.772418 3714598 logs.go:123] Gathering logs for kube-controller-manager [04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876] ...
I0723 15:19:51.772449 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876"
I0723 15:19:51.846910 3714598 logs.go:123] Gathering logs for containerd ...
I0723 15:19:51.846945 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0723 15:19:51.922227 3714598 logs.go:123] Gathering logs for container status ...
I0723 15:19:51.922272 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0723 15:19:51.972278 3714598 logs.go:123] Gathering logs for coredns [21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7] ...
I0723 15:19:51.972362 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7"
I0723 15:19:52.013921 3714598 logs.go:123] Gathering logs for etcd [38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad] ...
I0723 15:19:52.013951 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad"
I0723 15:19:52.064248 3714598 logs.go:123] Gathering logs for kube-scheduler [2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a] ...
I0723 15:19:52.064278 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a"
I0723 15:19:52.129820 3714598 logs.go:123] Gathering logs for kindnet [abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224] ...
I0723 15:19:52.129848 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224"
I0723 15:19:52.248551 3714598 logs.go:123] Gathering logs for storage-provisioner [a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2] ...
I0723 15:19:52.248587 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2"
I0723 15:19:52.300611 3714598 logs.go:123] Gathering logs for storage-provisioner [9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6] ...
I0723 15:19:52.300639 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6"
I0723 15:19:52.341928 3714598 logs.go:123] Gathering logs for etcd [e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69] ...
I0723 15:19:52.341960 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69"
I0723 15:19:52.383724 3714598 logs.go:123] Gathering logs for kindnet [8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac] ...
I0723 15:19:52.383755 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac"
I0723 15:19:52.440533 3714598 logs.go:123] Gathering logs for kube-controller-manager [8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd] ...
I0723 15:19:52.440578 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd"
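# Everything gathered above reduces to one pattern: "crictl logs --tail 400"
# per container, plus journalctl for the runtime unit. A minimal sketch that
# reproduces the same collection in one pass on the node (assumes crictl and
# journalctl are on PATH, as the commands above already do):
for id in $(sudo /usr/bin/crictl ps -a -q); do
  echo "=== container $id ==="
  sudo /usr/bin/crictl logs --tail 400 "$id"
done
sudo journalctl -u containerd -n 400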
I0723 15:19:52.512691 3714598 out.go:304] Setting ErrFile to fd 2...
I0723 15:19:52.512725 3714598 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0723 15:19:52.512855 3714598 out.go:239] X Problems detected in kubelet:
W0723 15:19:52.512876 3714598 out.go:239] Jul 23 15:19:22 old-k8s-version-808561 kubelet[651]: E0723 15:19:22.484915 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:52.512904 3714598 out.go:239] Jul 23 15:19:34 old-k8s-version-808561 kubelet[651]: E0723 15:19:34.485043 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:52.512914 3714598 out.go:239] Jul 23 15:19:37 old-k8s-version-808561 kubelet[651]: E0723 15:19:37.484452 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:52.512922 3714598 out.go:239] Jul 23 15:19:45 old-k8s-version-808561 kubelet[651]: E0723 15:19:45.484096 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:52.512932 3714598 out.go:239] Jul 23 15:19:50 old-k8s-version-808561 kubelet[651]: E0723 15:19:50.484391 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0723 15:19:52.512939 3714598 out.go:304] Setting ErrFile to fd 2...
I0723 15:19:52.512947 3714598 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 15:20:02.514702 3714598 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
I0723 15:20:02.524878 3714598 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
ok
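# The healthz probe above succeeds, so the exit below is not about apiserver
# liveness but about the control plane never reporting the expected v1.20.0.
# The equivalent manual probe, hedged (IP and port copied from the lines
# above; -k skips TLS verification against the cluster CA):
curl -k https://192.168.94.2:8443/healthz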
I0723 15:20:02.527579 3714598 out.go:177]
W0723 15:20:02.529480 3714598 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0723 15:20:02.529519 3714598 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0723 15:20:02.529537 3714598 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0723 15:20:02.529545 3714598 out.go:239] *
W0723 15:20:02.530507 3714598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0723 15:20:02.532802 3714598 out.go:177]
** /stderr **
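The exit path above names its own remedy: the restarted control plane never converged on v1.20.0, and minikube considers the profile unrecoverable in place. A minimal sketch of the suggested cleanup and retry, using only commands this output itself proposes, with the exact flags from the failing invocation:

  out/minikube-linux-arm64 delete --all --purge
  out/minikube-linux-arm64 start -p old-k8s-version-808561 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0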
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-808561 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-808561
helpers_test.go:235: (dbg) docker inspect old-k8s-version-808561:
-- stdout --
[
{
"Id": "831123147c411058d982f0692f5e0418542dbb60cce113fe15f16c7e20ae872c",
"Created": "2024-07-23T15:10:32.606019145Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 3714800,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-07-23T15:13:49.699577838Z",
"FinishedAt": "2024-07-23T15:13:48.662966285Z"
},
"Image": "sha256:71a7ac3dcc1f66f9b927c200bbaca5de093c77584a8e2cceb20f7c37b7028780",
"ResolvConfPath": "/var/lib/docker/containers/831123147c411058d982f0692f5e0418542dbb60cce113fe15f16c7e20ae872c/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/831123147c411058d982f0692f5e0418542dbb60cce113fe15f16c7e20ae872c/hostname",
"HostsPath": "/var/lib/docker/containers/831123147c411058d982f0692f5e0418542dbb60cce113fe15f16c7e20ae872c/hosts",
"LogPath": "/var/lib/docker/containers/831123147c411058d982f0692f5e0418542dbb60cce113fe15f16c7e20ae872c/831123147c411058d982f0692f5e0418542dbb60cce113fe15f16c7e20ae872c-json.log",
"Name": "/old-k8s-version-808561",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-808561:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-808561",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/56a2d2b3c22fef5ccc50402037c6c5f8993557e15dd265d68509e37354195c3f-init/diff:/var/lib/docker/overlay2/461d0b8919ac2353b7129812a8f1e79cefb2d1af514aa980073211c2a4674445/diff",
"MergedDir": "/var/lib/docker/overlay2/56a2d2b3c22fef5ccc50402037c6c5f8993557e15dd265d68509e37354195c3f/merged",
"UpperDir": "/var/lib/docker/overlay2/56a2d2b3c22fef5ccc50402037c6c5f8993557e15dd265d68509e37354195c3f/diff",
"WorkDir": "/var/lib/docker/overlay2/56a2d2b3c22fef5ccc50402037c6c5f8993557e15dd265d68509e37354195c3f/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-808561",
"Source": "/var/lib/docker/volumes/old-k8s-version-808561/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-808561",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-808561",
"name.minikube.sigs.k8s.io": "old-k8s-version-808561",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "fef76e29ad638bc5a089b3e7283e5a19f9c4b95a8f662170c0dfed93e690177b",
"SandboxKey": "/var/run/docker/netns/fef76e29ad63",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "37471"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "37472"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "37475"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "37473"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "37474"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-808561": {
"IPAMConfig": {
"IPv4Address": "192.168.94.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:5e:02",
"DriverOpts": null,
"NetworkID": "a3943675bca3962b2d330d87c9fad0be324ce4ac2bf29b18b654500e231f0568",
"EndpointID": "8910e4aed149b108142c4ced964e7d20a25890ff8a4eb7c47072aba73728e629",
"Gateway": "192.168.94.1",
"IPAddress": "192.168.94.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-808561",
"831123147c41"
]
}
}
}
}
]
-- /stdout --
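The inspect output shows how minikube publishes the node's ports: each container port is requested on 127.0.0.1 with an empty HostPort in HostConfig.PortBindings and resolved to an ephemeral host port under NetworkSettings.Ports (here 8443 maps to 37474). A hedged one-liner to read the resolved apiserver port without walking the JSON:

  docker port old-k8s-version-808561 8443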
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-808561 -n old-k8s-version-808561
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-808561 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-808561 logs -n 25: (2.000694336s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-480426 | cert-expiration-480426 | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC | 23 Jul 24 15:09 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-822622 | force-systemd-env-822622 | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC | 23 Jul 24 15:09 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-822622 | force-systemd-env-822622 | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC | 23 Jul 24 15:09 UTC |
| start | -p cert-options-688639 | cert-options-688639 | jenkins | v1.33.1 | 23 Jul 24 15:09 UTC | 23 Jul 24 15:10 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-688639 ssh | cert-options-688639 | jenkins | v1.33.1 | 23 Jul 24 15:10 UTC | 23 Jul 24 15:10 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-688639 -- sudo | cert-options-688639 | jenkins | v1.33.1 | 23 Jul 24 15:10 UTC | 23 Jul 24 15:10 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-688639 | cert-options-688639 | jenkins | v1.33.1 | 23 Jul 24 15:10 UTC | 23 Jul 24 15:10 UTC |
| start | -p old-k8s-version-808561 | old-k8s-version-808561 | jenkins | v1.33.1 | 23 Jul 24 15:10 UTC | 23 Jul 24 15:13 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-480426 | cert-expiration-480426 | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-480426 | cert-expiration-480426 | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:12 UTC |
| start | -p no-preload-942813 --memory=2200 | no-preload-942813 | jenkins | v1.33.1 | 23 Jul 24 15:12 UTC | 23 Jul 24 15:14 UTC |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.0-beta.0 | | | | | |
| addons | enable metrics-server -p old-k8s-version-808561 | old-k8s-version-808561 | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-808561 | old-k8s-version-808561 | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-808561 | old-k8s-version-808561 | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | 23 Jul 24 15:13 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-808561 | old-k8s-version-808561 | jenkins | v1.33.1 | 23 Jul 24 15:13 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-942813 | no-preload-942813 | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:14 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-942813 | no-preload-942813 | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:14 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-942813 | no-preload-942813 | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:14 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-942813 --memory=2200 | no-preload-942813 | jenkins | v1.33.1 | 23 Jul 24 15:14 UTC | 23 Jul 24 15:18 UTC |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.0-beta.0 | | | | | |
| image | no-preload-942813 image list | no-preload-942813 | jenkins | v1.33.1 | 23 Jul 24 15:19 UTC | 23 Jul 24 15:19 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-942813 | no-preload-942813 | jenkins | v1.33.1 | 23 Jul 24 15:19 UTC | 23 Jul 24 15:19 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-942813 | no-preload-942813 | jenkins | v1.33.1 | 23 Jul 24 15:19 UTC | 23 Jul 24 15:19 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-942813 | no-preload-942813 | jenkins | v1.33.1 | 23 Jul 24 15:19 UTC | 23 Jul 24 15:19 UTC |
| delete | -p no-preload-942813 | no-preload-942813 | jenkins | v1.33.1 | 23 Jul 24 15:19 UTC | 23 Jul 24 15:19 UTC |
| start | -p embed-certs-845285 | embed-certs-845285 | jenkins | v1.33.1 | 23 Jul 24 15:19 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.30.3 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
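The audit rows above also explain the kubelet errors earlier in this log: before the stop/start cycle, metrics-server was deliberately re-pointed at the unreachable registry fake.domain. Reconstructed from the table row as a single command (a sketch of what the harness ran, not a new invocation):

  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-808561 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain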
==> Last Start <==
Log file created at: 2024/07/23 15:19:10
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.22.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0723 15:19:10.413872 3724128 out.go:291] Setting OutFile to fd 1 ...
I0723 15:19:10.414060 3724128 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 15:19:10.414073 3724128 out.go:304] Setting ErrFile to fd 2...
I0723 15:19:10.414080 3724128 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 15:19:10.414306 3724128 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19319-3501487/.minikube/bin
I0723 15:19:10.414832 3724128 out.go:298] Setting JSON to false
I0723 15:19:10.416151 3724128 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":86472,"bootTime":1721661479,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I0723 15:19:10.416228 3724128 start.go:139] virtualization:
I0723 15:19:10.419337 3724128 out.go:177] * [embed-certs-845285] minikube v1.33.1 on Ubuntu 20.04 (arm64)
I0723 15:19:10.421942 3724128 out.go:177] - MINIKUBE_LOCATION=19319
I0723 15:19:10.422089 3724128 notify.go:220] Checking for updates...
I0723 15:19:10.426101 3724128 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0723 15:19:10.427984 3724128 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19319-3501487/kubeconfig
I0723 15:19:10.429979 3724128 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19319-3501487/.minikube
I0723 15:19:10.432092 3724128 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0723 15:19:10.434360 3724128 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0723 15:19:10.437292 3724128 config.go:182] Loaded profile config "old-k8s-version-808561": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0723 15:19:10.437405 3724128 driver.go:392] Setting default libvirt URI to qemu:///system
I0723 15:19:10.468427 3724128 docker.go:123] docker version: linux-27.1.0:Docker Engine - Community
I0723 15:19:10.468544 3724128 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0723 15:19:10.525809 3724128 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:63 SystemTime:2024-07-23 15:19:10.516096504 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
I0723 15:19:10.525933 3724128 docker.go:307] overlay module found
I0723 15:19:10.528243 3724128 out.go:177] * Using the docker driver based on user configuration
I0723 15:19:10.529855 3724128 start.go:297] selected driver: docker
I0723 15:19:10.529880 3724128 start.go:901] validating driver "docker" against <nil>
I0723 15:19:10.529893 3724128 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0723 15:19:10.530536 3724128 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0723 15:19:10.601287 3724128 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:5 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:63 SystemTime:2024-07-23 15:19:10.592423144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.0]] Warnings:<nil>}}
I0723 15:19:10.601457 3724128 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0723 15:19:10.601687 3724128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0723 15:19:10.603890 3724128 out.go:177] * Using Docker driver with root privileges
I0723 15:19:10.605912 3724128 cni.go:84] Creating CNI manager for ""
I0723 15:19:10.605936 3724128 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0723 15:19:10.605948 3724128 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0723 15:19:10.606042 3724128 start.go:340] cluster config:
{Name:embed-certs-845285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-845285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0723 15:19:10.609399 3724128 out.go:177] * Starting "embed-certs-845285" primary control-plane node in "embed-certs-845285" cluster
I0723 15:19:10.611227 3724128 cache.go:121] Beginning downloading kic base image for docker with containerd
I0723 15:19:10.613373 3724128 out.go:177] * Pulling base image v0.0.44-1721687125-19319 ...
I0723 15:19:10.615174 3724128 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
I0723 15:19:10.615232 3724128 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19319-3501487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
I0723 15:19:10.615245 3724128 cache.go:56] Caching tarball of preloaded images
I0723 15:19:10.615324 3724128 preload.go:172] Found /home/jenkins/minikube-integration/19319-3501487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0723 15:19:10.615340 3724128 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on containerd
I0723 15:19:10.615449 3724128 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/config.json ...
I0723 15:19:10.615473 3724128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/config.json: {Name:mk4e23fedd78f8d5d8a5b0e59987610ca724f7ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:19:10.615639 3724128 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local docker daemon
W0723 15:19:10.635623 3724128 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae is of wrong architecture
I0723 15:19:10.635646 3724128 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae to local cache
I0723 15:19:10.635727 3724128 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory
I0723 15:19:10.635748 3724128 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae in local cache directory, skipping pull
I0723 15:19:10.635758 3724128 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae exists in cache, skipping pull
I0723 15:19:10.635766 3724128 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae as a tarball
I0723 15:19:10.635772 3724128 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae from local cache
I0723 15:19:10.832957 3724128 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae from cached tarball
I0723 15:19:10.832993 3724128 cache.go:194] Successfully downloaded all kic artifacts
I0723 15:19:10.833039 3724128 start.go:360] acquireMachinesLock for embed-certs-845285: {Name:mk216b9f21e8d73f4a5db741b3e5a95c130cb808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0723 15:19:10.833643 3724128 start.go:364] duration metric: took 577.613µs to acquireMachinesLock for "embed-certs-845285"
I0723 15:19:10.833690 3724128 start.go:93] Provisioning new machine with config: &{Name:embed-certs-845285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-845285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0723 15:19:10.833775 3724128 start.go:125] createHost starting for "" (driver="docker")
I0723 15:19:10.872619 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:12.874756 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
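Two minikube processes interleave from here on: pid 3714598 is the old-k8s-version-808561 wait loop still polling pod readiness, while pid 3724128 creates embed-certs-845285. The pod_ready lines poll the Ready condition of metrics-server-9975d5f86-mrdtz; a hedged equivalent one-off check from the host (assumes a kubectl context named after the profile, which is minikube's usual convention):

  kubectl --context old-k8s-version-808561 -n kube-system get pod metrics-server-9975d5f86-mrdtz -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'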
I0723 15:19:10.836438 3724128 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0723 15:19:10.836762 3724128 start.go:159] libmachine.API.Create for "embed-certs-845285" (driver="docker")
I0723 15:19:10.836812 3724128 client.go:168] LocalClient.Create starting
I0723 15:19:10.836978 3724128 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca.pem
I0723 15:19:10.837026 3724128 main.go:141] libmachine: Decoding PEM data...
I0723 15:19:10.837045 3724128 main.go:141] libmachine: Parsing certificate...
I0723 15:19:10.837102 3724128 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/cert.pem
I0723 15:19:10.837129 3724128 main.go:141] libmachine: Decoding PEM data...
I0723 15:19:10.837144 3724128 main.go:141] libmachine: Parsing certificate...
I0723 15:19:10.837645 3724128 cli_runner.go:164] Run: docker network inspect embed-certs-845285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0723 15:19:10.853254 3724128 cli_runner.go:211] docker network inspect embed-certs-845285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0723 15:19:10.853337 3724128 network_create.go:284] running [docker network inspect embed-certs-845285] to gather additional debugging logs...
I0723 15:19:10.853360 3724128 cli_runner.go:164] Run: docker network inspect embed-certs-845285
W0723 15:19:10.873887 3724128 cli_runner.go:211] docker network inspect embed-certs-845285 returned with exit code 1
I0723 15:19:10.873915 3724128 network_create.go:287] error running [docker network inspect embed-certs-845285]: docker network inspect embed-certs-845285: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-845285 not found
I0723 15:19:10.873929 3724128 network_create.go:289] output of [docker network inspect embed-certs-845285]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-845285 not found
** /stderr **
I0723 15:19:10.874019 3724128 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0723 15:19:10.889105 3724128 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-822da3215d85 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:20:7f:7c:aa} reservation:<nil>}
I0723 15:19:10.889470 3724128 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5740f289718d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:1a:b9:17:5f} reservation:<nil>}
I0723 15:19:10.890033 3724128 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fbb01a1edb33 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:b6:5a:09:37} reservation:<nil>}
I0723 15:19:10.890510 3724128 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d14da56f541a IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:f3:c4:95:0a} reservation:<nil>}
I0723 15:19:10.891107 3724128 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001960500}
I0723 15:19:10.891128 3724128 network_create.go:124] attempt to create docker network embed-certs-845285 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0723 15:19:10.891190 3724128 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-845285 embed-certs-845285
I0723 15:19:10.961871 3724128 network_create.go:108] docker network embed-certs-845285 192.168.85.0/24 created
I0723 15:19:10.961908 3724128 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-845285" container
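The network_create lines above show the subnet picker: walk candidate private /24s, skip any already claimed by an existing bridge (here .49, .58, .67, .76), take the first free one (192.168.85.0/24), and reserve .2 for the node. A hedged way to see the same claim map from the host:

  docker network ls --format '{{.Name}}' | while read n; do echo "$n $(docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' "$n")"; done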
I0723 15:19:10.962009 3724128 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0723 15:19:10.977735 3724128 cli_runner.go:164] Run: docker volume create embed-certs-845285 --label name.minikube.sigs.k8s.io=embed-certs-845285 --label created_by.minikube.sigs.k8s.io=true
I0723 15:19:10.994444 3724128 oci.go:103] Successfully created a docker volume embed-certs-845285
I0723 15:19:10.994554 3724128 cli_runner.go:164] Run: docker run --rm --name embed-certs-845285-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-845285 --entrypoint /usr/bin/test -v embed-certs-845285:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae -d /var/lib
I0723 15:19:11.681223 3724128 oci.go:107] Successfully prepared a docker volume embed-certs-845285
I0723 15:19:11.681280 3724128 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
I0723 15:19:11.681303 3724128 kic.go:194] Starting extracting preloaded images to volume ...
I0723 15:19:11.681389 3724128 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19319-3501487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-845285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae -I lz4 -xf /preloaded.tar -C /extractDir
I0723 15:19:15.392173 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:17.873922 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:17.165605 3724128 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19319-3501487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-845285:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae -I lz4 -xf /preloaded.tar -C /extractDir: (5.484175911s)
I0723 15:19:17.165639 3724128 kic.go:203] duration metric: took 5.484331645s to extract preloaded images to volume ...
W0723 15:19:17.165783 3724128 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0723 15:19:17.165906 3724128 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0723 15:19:17.226802 3724128 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-845285 --name embed-certs-845285 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-845285 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-845285 --network embed-certs-845285 --ip 192.168.85.2 --volume embed-certs-845285:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae
I0723 15:19:17.563372 3724128 cli_runner.go:164] Run: docker container inspect embed-certs-845285 --format={{.State.Running}}
I0723 15:19:17.582657 3724128 cli_runner.go:164] Run: docker container inspect embed-certs-845285 --format={{.State.Status}}
I0723 15:19:17.602567 3724128 cli_runner.go:164] Run: docker exec embed-certs-845285 stat /var/lib/dpkg/alternatives/iptables
I0723 15:19:17.666034 3724128 oci.go:144] the created container "embed-certs-845285" has a running status.
I0723 15:19:17.666065 3724128 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19319-3501487/.minikube/machines/embed-certs-845285/id_rsa...
I0723 15:19:17.978001 3724128 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19319-3501487/.minikube/machines/embed-certs-845285/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0723 15:19:18.006952 3724128 cli_runner.go:164] Run: docker container inspect embed-certs-845285 --format={{.State.Status}}
I0723 15:19:18.042066 3724128 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0723 15:19:18.042093 3724128 kic_runner.go:114] Args: [docker exec --privileged embed-certs-845285 chown docker:docker /home/docker/.ssh/authorized_keys]
I0723 15:19:18.146806 3724128 cli_runner.go:164] Run: docker container inspect embed-certs-845285 --format={{.State.Status}}
I0723 15:19:18.176714 3724128 machine.go:94] provisionDockerMachine start ...
I0723 15:19:18.176824 3724128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845285
I0723 15:19:18.208051 3724128 main.go:141] libmachine: Using SSH client type: native
I0723 15:19:18.208378 3724128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 37481 <nil> <nil>}
I0723 15:19:18.208388 3724128 main.go:141] libmachine: About to run SSH command:
hostname
I0723 15:19:18.209277 3724128 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38306->127.0.0.1:37481: read: connection reset by peer
I0723 15:19:21.339972 3724128 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-845285
I0723 15:19:21.339995 3724128 ubuntu.go:169] provisioning hostname "embed-certs-845285"
I0723 15:19:21.340079 3724128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845285
I0723 15:19:21.356252 3724128 main.go:141] libmachine: Using SSH client type: native
I0723 15:19:21.356533 3724128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 37481 <nil> <nil>}
I0723 15:19:21.356549 3724128 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-845285 && echo "embed-certs-845285" | sudo tee /etc/hostname
I0723 15:19:21.502201 3724128 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-845285
I0723 15:19:21.502280 3724128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845285
I0723 15:19:21.522472 3724128 main.go:141] libmachine: Using SSH client type: native
I0723 15:19:21.522730 3724128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil> [] 0s} 127.0.0.1 37481 <nil> <nil>}
I0723 15:19:21.522753 3724128 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-845285' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-845285/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-845285' | sudo tee -a /etc/hosts;
fi
fi
I0723 15:19:21.649367 3724128 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0723 15:19:21.649407 3724128 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19319-3501487/.minikube CaCertPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19319-3501487/.minikube}
I0723 15:19:21.649427 3724128 ubuntu.go:177] setting up certificates
I0723 15:19:21.649439 3724128 provision.go:84] configureAuth start
I0723 15:19:21.649500 3724128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-845285
I0723 15:19:21.666955 3724128 provision.go:143] copyHostCerts
I0723 15:19:21.667029 3724128 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-3501487/.minikube/cert.pem, removing ...
I0723 15:19:21.667038 3724128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-3501487/.minikube/cert.pem
I0723 15:19:21.667118 3724128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19319-3501487/.minikube/cert.pem (1123 bytes)
I0723 15:19:21.667203 3724128 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-3501487/.minikube/key.pem, removing ...
I0723 15:19:21.667208 3724128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-3501487/.minikube/key.pem
I0723 15:19:21.667231 3724128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19319-3501487/.minikube/key.pem (1679 bytes)
I0723 15:19:21.667287 3724128 exec_runner.go:144] found /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.pem, removing ...
I0723 15:19:21.667291 3724128 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.pem
I0723 15:19:21.667313 3724128 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.pem (1082 bytes)
I0723 15:19:21.667396 3724128 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca-key.pem org=jenkins.embed-certs-845285 san=[127.0.0.1 192.168.85.2 embed-certs-845285 localhost minikube]
I0723 15:19:22.085829 3724128 provision.go:177] copyRemoteCerts
I0723 15:19:22.085925 3724128 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0723 15:19:22.085989 3724128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845285
I0723 15:19:22.104760 3724128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37481 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/embed-certs-845285/id_rsa Username:docker}
I0723 15:19:22.198879 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0723 15:19:22.231601 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0723 15:19:22.258457 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0723 15:19:22.286711 3724128 provision.go:87] duration metric: took 637.258908ms to configureAuth
I0723 15:19:22.286741 3724128 ubuntu.go:193] setting minikube options for container-runtime
I0723 15:19:22.286944 3724128 config.go:182] Loaded profile config "embed-certs-845285": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0723 15:19:22.286958 3724128 machine.go:97] duration metric: took 4.110223795s to provisionDockerMachine
I0723 15:19:22.286965 3724128 client.go:171] duration metric: took 11.450141327s to LocalClient.Create
I0723 15:19:22.286978 3724128 start.go:167] duration metric: took 11.450217461s to libmachine.API.Create "embed-certs-845285"
I0723 15:19:22.286988 3724128 start.go:293] postStartSetup for "embed-certs-845285" (driver="docker")
I0723 15:19:22.286997 3724128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0723 15:19:22.287056 3724128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0723 15:19:22.287099 3724128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845285
I0723 15:19:22.303856 3724128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37481 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/embed-certs-845285/id_rsa Username:docker}
I0723 15:19:22.401637 3724128 ssh_runner.go:195] Run: cat /etc/os-release
I0723 15:19:22.404812 3724128 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0723 15:19:22.404846 3724128 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0723 15:19:22.404857 3724128 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0723 15:19:22.404864 3724128 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0723 15:19:22.404874 3724128 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3501487/.minikube/addons for local assets ...
I0723 15:19:22.404943 3724128 filesync.go:126] Scanning /home/jenkins/minikube-integration/19319-3501487/.minikube/files for local assets ...
I0723 15:19:22.405036 3724128 filesync.go:149] local asset: /home/jenkins/minikube-integration/19319-3501487/.minikube/files/etc/ssl/certs/35068982.pem -> 35068982.pem in /etc/ssl/certs
I0723 15:19:22.405150 3724128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0723 15:19:22.413813 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/files/etc/ssl/certs/35068982.pem --> /etc/ssl/certs/35068982.pem (1708 bytes)
I0723 15:19:22.440352 3724128 start.go:296] duration metric: took 153.348793ms for postStartSetup
I0723 15:19:22.440775 3724128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-845285
I0723 15:19:22.457130 3724128 profile.go:143] Saving config to /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/config.json ...
I0723 15:19:22.457516 3724128 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0723 15:19:22.457593 3724128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845285
I0723 15:19:22.475913 3724128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37481 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/embed-certs-845285/id_rsa Username:docker}
I0723 15:19:22.566010 3724128 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0723 15:19:22.570904 3724128 start.go:128] duration metric: took 11.73711284s to createHost
I0723 15:19:22.570938 3724128 start.go:83] releasing machines lock for "embed-certs-845285", held for 11.73727951s
I0723 15:19:22.571022 3724128 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-845285
I0723 15:19:22.590694 3724128 ssh_runner.go:195] Run: cat /version.json
I0723 15:19:22.590754 3724128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845285
I0723 15:19:22.590990 3724128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0723 15:19:22.591048 3724128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845285
I0723 15:19:22.607655 3724128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37481 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/embed-certs-845285/id_rsa Username:docker}
I0723 15:19:22.611119 3724128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37481 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/embed-certs-845285/id_rsa Username:docker}
I0723 15:19:22.699756 3724128 ssh_runner.go:195] Run: systemctl --version
I0723 15:19:22.830555 3724128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0723 15:19:22.835077 3724128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0723 15:19:22.861972 3724128 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0723 15:19:22.862107 3724128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0723 15:19:22.896177 3724128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0723 15:19:22.896246 3724128 start.go:495] detecting cgroup driver to use...
I0723 15:19:22.896319 3724128 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0723 15:19:22.896410 3724128 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0723 15:19:22.908980 3724128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0723 15:19:22.921176 3724128 docker.go:217] disabling cri-docker service (if available) ...
I0723 15:19:22.921261 3724128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0723 15:19:22.936083 3724128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0723 15:19:22.951608 3724128 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0723 15:19:23.037213 3724128 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0723 15:19:23.152828 3724128 docker.go:233] disabling docker service ...
I0723 15:19:23.152915 3724128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0723 15:19:23.180499 3724128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0723 15:19:23.193584 3724128 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0723 15:19:23.298428 3724128 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0723 15:19:23.394890 3724128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0723 15:19:23.407599 3724128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0723 15:19:23.428053 3724128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0723 15:19:23.444126 3724128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0723 15:19:23.460060 3724128 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0723 15:19:23.460178 3724128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0723 15:19:23.473583 3724128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0723 15:19:23.484893 3724128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0723 15:19:23.494984 3724128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0723 15:19:23.505103 3724128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0723 15:19:23.517402 3724128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0723 15:19:23.530185 3724128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0723 15:19:23.541736 3724128 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0723 15:19:23.552436 3724128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0723 15:19:23.561366 3724128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0723 15:19:23.571175 3724128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0723 15:19:23.669695 3724128 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0723 15:19:23.795447 3724128 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0723 15:19:23.795541 3724128 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0723 15:19:23.799283 3724128 start.go:563] Will wait 60s for crictl version
I0723 15:19:23.799377 3724128 ssh_runner.go:195] Run: which crictl
I0723 15:19:23.802678 3724128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0723 15:19:23.848878 3724128 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.19
RuntimeApiVersion: v1
I0723 15:19:23.848987 3724128 ssh_runner.go:195] Run: containerd --version
I0723 15:19:23.874508 3724128 ssh_runner.go:195] Run: containerd --version
I0723 15:19:23.898963 3724128 out.go:177] * Preparing Kubernetes v1.30.3 on containerd 1.7.19 ...
I0723 15:19:20.370392 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:22.371785 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:23.901094 3724128 cli_runner.go:164] Run: docker network inspect embed-certs-845285 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0723 15:19:23.916218 3724128 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0723 15:19:23.920265 3724128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0723 15:19:23.931814 3724128 kubeadm.go:883] updating cluster {Name:embed-certs-845285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-845285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0723 15:19:23.931931 3724128 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
I0723 15:19:23.931995 3724128 ssh_runner.go:195] Run: sudo crictl images --output json
I0723 15:19:23.974312 3724128 containerd.go:627] all images are preloaded for containerd runtime.
I0723 15:19:23.974335 3724128 containerd.go:534] Images already preloaded, skipping extraction
I0723 15:19:23.974396 3724128 ssh_runner.go:195] Run: sudo crictl images --output json
I0723 15:19:24.018972 3724128 containerd.go:627] all images are preloaded for containerd runtime.
I0723 15:19:24.018998 3724128 cache_images.go:84] Images are preloaded, skipping loading
I0723 15:19:24.019006 3724128 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.30.3 containerd true true} ...
I0723 15:19:24.019116 3724128 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-845285 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.30.3 ClusterName:embed-certs-845285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0723 15:19:24.019199 3724128 ssh_runner.go:195] Run: sudo crictl info
I0723 15:19:24.059497 3724128 cni.go:84] Creating CNI manager for ""
I0723 15:19:24.059523 3724128 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0723 15:19:24.059582 3724128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0723 15:19:24.059615 3724128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-845285 NodeName:embed-certs-845285 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0723 15:19:24.059763 3724128 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-845285"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.3
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0723 15:19:24.059857 3724128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
I0723 15:19:24.069680 3724128 binaries.go:44] Found k8s binaries, skipping transfer
I0723 15:19:24.069774 3724128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0723 15:19:24.079784 3724128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0723 15:19:24.099260 3724128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0723 15:19:24.119921 3724128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
I0723 15:19:24.140393 3724128 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0723 15:19:24.143881 3724128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0723 15:19:24.155218 3724128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0723 15:19:24.243763 3724128 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0723 15:19:24.258430 3724128 certs.go:68] Setting up /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285 for IP: 192.168.85.2
I0723 15:19:24.258453 3724128 certs.go:194] generating shared ca certs ...
I0723 15:19:24.258469 3724128 certs.go:226] acquiring lock for ca certs: {Name:mke9a16e2fca4d99d18822e41138928c0b1feaa4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:19:24.258608 3724128 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.key
I0723 15:19:24.258656 3724128 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/proxy-client-ca.key
I0723 15:19:24.258668 3724128 certs.go:256] generating profile certs ...
I0723 15:19:24.258724 3724128 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/client.key
I0723 15:19:24.258751 3724128 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/client.crt with IP's: []
I0723 15:19:24.532558 3724128 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/client.crt ...
I0723 15:19:24.532588 3724128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/client.crt: {Name:mk7b58df2157d20758bc446a486eb5143eaf10d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:19:24.533342 3724128 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/client.key ...
I0723 15:19:24.533362 3724128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/client.key: {Name:mk84fabd0a5cd01ca00e98257394ec70f52fe464 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:19:24.533873 3724128 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.key.7e08670e
I0723 15:19:24.533895 3724128 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.crt.7e08670e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0723 15:19:25.408969 3724128 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.crt.7e08670e ...
I0723 15:19:25.409004 3724128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.crt.7e08670e: {Name:mke22e6b10d2622b27353c9c235a4032d620379c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:19:25.409607 3724128 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.key.7e08670e ...
I0723 15:19:25.409630 3724128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.key.7e08670e: {Name:mk10a8c1cf34f12e6f22d3e9b7cdd67b59e54aa3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:19:25.410184 3724128 certs.go:381] copying /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.crt.7e08670e -> /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.crt
I0723 15:19:25.410333 3724128 certs.go:385] copying /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.key.7e08670e -> /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.key
I0723 15:19:25.410405 3724128 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/proxy-client.key
I0723 15:19:25.410427 3724128 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/proxy-client.crt with IP's: []
I0723 15:19:24.373992 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:26.408441 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:28.873586 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:25.527814 3724128 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/proxy-client.crt ...
I0723 15:19:25.527847 3724128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/proxy-client.crt: {Name:mkd5fe1c11b2599e41bb83ec3bf1f5d533eca82d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:19:25.528047 3724128 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/proxy-client.key ...
I0723 15:19:25.528063 3724128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/proxy-client.key: {Name:mk9abfab65604002a76b4719ed1e2e2e6f34bd83 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:19:25.529395 3724128 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/3506898.pem (1338 bytes)
W0723 15:19:25.529445 3724128 certs.go:480] ignoring /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/3506898_empty.pem, impossibly tiny 0 bytes
I0723 15:19:25.529460 3724128 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca-key.pem (1679 bytes)
I0723 15:19:25.529483 3724128 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/ca.pem (1082 bytes)
I0723 15:19:25.529517 3724128 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/cert.pem (1123 bytes)
I0723 15:19:25.529544 3724128 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/key.pem (1679 bytes)
I0723 15:19:25.529601 3724128 certs.go:484] found cert: /home/jenkins/minikube-integration/19319-3501487/.minikube/files/etc/ssl/certs/35068982.pem (1708 bytes)
I0723 15:19:25.530225 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0723 15:19:25.558218 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0723 15:19:25.584212 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0723 15:19:25.609582 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0723 15:19:25.636056 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0723 15:19:25.663480 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0723 15:19:25.692966 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0723 15:19:25.722885 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/profiles/embed-certs-845285/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0723 15:19:25.759987 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0723 15:19:25.793190 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/certs/3506898.pem --> /usr/share/ca-certificates/3506898.pem (1338 bytes)
I0723 15:19:25.820039 3724128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19319-3501487/.minikube/files/etc/ssl/certs/35068982.pem --> /usr/share/ca-certificates/35068982.pem (1708 bytes)
I0723 15:19:25.848522 3724128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0723 15:19:25.874023 3724128 ssh_runner.go:195] Run: openssl version
I0723 15:19:25.880286 3724128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0723 15:19:25.890432 3724128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0723 15:19:25.894130 3724128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 23 14:25 /usr/share/ca-certificates/minikubeCA.pem
I0723 15:19:25.894222 3724128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0723 15:19:25.901519 3724128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0723 15:19:25.910984 3724128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3506898.pem && ln -fs /usr/share/ca-certificates/3506898.pem /etc/ssl/certs/3506898.pem"
I0723 15:19:25.920451 3724128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3506898.pem
I0723 15:19:25.924063 3724128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 23 14:33 /usr/share/ca-certificates/3506898.pem
I0723 15:19:25.924162 3724128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3506898.pem
I0723 15:19:25.931593 3724128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/3506898.pem /etc/ssl/certs/51391683.0"
I0723 15:19:25.941430 3724128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/35068982.pem && ln -fs /usr/share/ca-certificates/35068982.pem /etc/ssl/certs/35068982.pem"
I0723 15:19:25.951230 3724128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/35068982.pem
I0723 15:19:25.954870 3724128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 23 14:33 /usr/share/ca-certificates/35068982.pem
I0723 15:19:25.954942 3724128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/35068982.pem
I0723 15:19:25.962047 3724128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/35068982.pem /etc/ssl/certs/3ec20f2e.0"
I0723 15:19:25.971986 3724128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0723 15:19:25.975583 3724128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0723 15:19:25.975639 3724128 kubeadm.go:392] StartCluster: {Name:embed-certs-845285 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721687125-19319@sha256:8e08c9232f6be53a68fac8937bc5a03a99e38d6a1ff7b6ea4048c989041004ae Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-845285 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0723 15:19:25.975724 3724128 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0723 15:19:25.975790 3724128 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0723 15:19:26.015122 3724128 cri.go:89] found id: ""
I0723 15:19:26.015268 3724128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0723 15:19:26.025882 3724128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0723 15:19:26.036420 3724128 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0723 15:19:26.036532 3724128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0723 15:19:26.046337 3724128 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0723 15:19:26.046355 3724128 kubeadm.go:157] found existing configuration files:
I0723 15:19:26.046421 3724128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0723 15:19:26.055814 3724128 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0723 15:19:26.055910 3724128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0723 15:19:26.065147 3724128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0723 15:19:26.074778 3724128 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0723 15:19:26.074847 3724128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0723 15:19:26.083887 3724128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0723 15:19:26.093580 3724128 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0723 15:19:26.093651 3724128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0723 15:19:26.103247 3724128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0723 15:19:26.112026 3724128 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0723 15:19:26.112119 3724128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0723 15:19:26.120957 3724128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0723 15:19:26.228909 3724128 kubeadm.go:310] [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1065-aws\n", err: exit status 1
I0723 15:19:26.312388 3724128 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0723 15:19:31.372498 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:33.872284 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:36.371249 3714598 pod_ready.go:102] pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace has status "Ready":"False"
I0723 15:19:37.871829 3714598 pod_ready.go:81] duration metric: took 4m0.006918868s for pod "metrics-server-9975d5f86-mrdtz" in "kube-system" namespace to be "Ready" ...
E0723 15:19:37.871864 3714598 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0723 15:19:37.871873 3714598 pod_ready.go:38] duration metric: took 5m23.886727323s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0723 15:19:37.871886 3714598 api_server.go:52] waiting for apiserver process to appear ...
I0723 15:19:37.871916 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0723 15:19:37.871982 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0723 15:19:37.944728 3714598 cri.go:89] found id: "3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b"
I0723 15:19:37.944753 3714598 cri.go:89] found id: "3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02"
I0723 15:19:37.944758 3714598 cri.go:89] found id: ""
I0723 15:19:37.944765 3714598 logs.go:276] 2 containers: [3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b 3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02]
I0723 15:19:37.944824 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:37.956790 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:37.960739 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0723 15:19:37.960812 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0723 15:19:38.023180 3714598 cri.go:89] found id: "e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69"
I0723 15:19:38.023207 3714598 cri.go:89] found id: "38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad"
I0723 15:19:38.023212 3714598 cri.go:89] found id: ""
I0723 15:19:38.023220 3714598 logs.go:276] 2 containers: [e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69 38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad]
I0723 15:19:38.023287 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.028272 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.032944 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0723 15:19:38.033029 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0723 15:19:38.108278 3714598 cri.go:89] found id: "21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7"
I0723 15:19:38.108321 3714598 cri.go:89] found id: "9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b"
I0723 15:19:38.108327 3714598 cri.go:89] found id: ""
I0723 15:19:38.108335 3714598 logs.go:276] 2 containers: [21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7 9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b]
I0723 15:19:38.108394 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.116642 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.121037 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0723 15:19:38.121113 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0723 15:19:38.200574 3714598 cri.go:89] found id: "2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a"
I0723 15:19:38.200599 3714598 cri.go:89] found id: "2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999"
I0723 15:19:38.200604 3714598 cri.go:89] found id: ""
I0723 15:19:38.200612 3714598 logs.go:276] 2 containers: [2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a 2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999]
I0723 15:19:38.200669 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.206112 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.230286 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0723 15:19:38.230410 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0723 15:19:38.324505 3714598 cri.go:89] found id: "77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c"
I0723 15:19:38.324529 3714598 cri.go:89] found id: "c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30"
I0723 15:19:38.324534 3714598 cri.go:89] found id: ""
I0723 15:19:38.324542 3714598 logs.go:276] 2 containers: [77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30]
I0723 15:19:38.324602 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.330443 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.334064 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0723 15:19:38.334147 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0723 15:19:38.392951 3714598 cri.go:89] found id: "04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876"
I0723 15:19:38.393029 3714598 cri.go:89] found id: "8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd"
I0723 15:19:38.393047 3714598 cri.go:89] found id: ""
I0723 15:19:38.393086 3714598 logs.go:276] 2 containers: [04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876 8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd]
I0723 15:19:38.393173 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.400994 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.406747 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0723 15:19:38.406869 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0723 15:19:38.512283 3714598 cri.go:89] found id: "8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac"
I0723 15:19:38.512364 3714598 cri.go:89] found id: "abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224"
I0723 15:19:38.512382 3714598 cri.go:89] found id: ""
I0723 15:19:38.512402 3714598 logs.go:276] 2 containers: [8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224]
I0723 15:19:38.512485 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.520453 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.524098 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0723 15:19:38.524214 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0723 15:19:38.601870 3714598 cri.go:89] found id: "c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8"
I0723 15:19:38.601940 3714598 cri.go:89] found id: ""
I0723 15:19:38.601960 3714598 logs.go:276] 1 container: [c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8]
I0723 15:19:38.602086 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.607099 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0723 15:19:38.607217 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0723 15:19:38.713066 3714598 cri.go:89] found id: "a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2"
I0723 15:19:38.713092 3714598 cri.go:89] found id: "9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6"
I0723 15:19:38.713097 3714598 cri.go:89] found id: ""
I0723 15:19:38.713105 3714598 logs.go:276] 2 containers: [a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2 9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6]
I0723 15:19:38.713175 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:38.717141 3714598 ssh_runner.go:195] Run: which crictl
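
The eight "listing CRI containers" probes above all follow one pattern: resolve crictl with "which crictl", then ask containerd for matching container IDs. A minimal hand-run sketch of that probe, assuming shell access to the node (the docker driver exposes it via "minikube ssh"; names below are taken from this log):

  # Same discovery minikube performs per component:
  # all states (-a), IDs only (--quiet), filtered by container name
  sudo crictl ps -a --quiet --name=coredns
  sudo crictl ps -a --quiet --name=kube-scheduler
  # The trailing found id: "" in each block is the empty element left
  # over when the newline-terminated output is split into IDs
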
I0723 15:19:38.721751 3714598 logs.go:123] Gathering logs for kube-proxy [77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c] ...
I0723 15:19:38.721777 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c"
I0723 15:19:38.797009 3714598 logs.go:123] Gathering logs for kubernetes-dashboard [c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8] ...
I0723 15:19:38.797086 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8"
I0723 15:19:38.861327 3714598 logs.go:123] Gathering logs for storage-provisioner [9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6] ...
I0723 15:19:38.861398 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6"
I0723 15:19:38.911530 3714598 logs.go:123] Gathering logs for containerd ...
I0723 15:19:38.911600 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0723 15:19:38.978921 3714598 logs.go:123] Gathering logs for container status ...
I0723 15:19:38.979018 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0723 15:19:39.058845 3714598 logs.go:123] Gathering logs for describe nodes ...
I0723 15:19:39.059112 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
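
Each "Gathering logs for ..." step above shells out to crictl with a fixed 400-line tail per container; the "describe nodes" step uses the bundled kubectl directly. A hand-run equivalent under the same profile and driver as this run (container ID copied from the log):

  # Tail the last 400 lines of one container's logs, as logs.go does
  minikube -p old-k8s-version-808561 ssh -- \
    sudo /usr/bin/crictl logs --tail 400 21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7

  # Reproduce the "describe nodes" gather with the kubelet-side kubectl
  minikube -p old-k8s-version-808561 ssh -- \
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
    --kubeconfig=/var/lib/minikube/kubeconfig
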
I0723 15:19:42.544052 3724128 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
I0723 15:19:42.544112 3724128 kubeadm.go:310] [preflight] Running pre-flight checks
I0723 15:19:42.544230 3724128 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0723 15:19:42.544353 3724128 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1065-aws
I0723 15:19:42.544389 3724128 kubeadm.go:310] OS: Linux
I0723 15:19:42.544433 3724128 kubeadm.go:310] CGROUPS_CPU: enabled
I0723 15:19:42.544481 3724128 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0723 15:19:42.544527 3724128 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0723 15:19:42.544573 3724128 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0723 15:19:42.544629 3724128 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0723 15:19:42.544678 3724128 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0723 15:19:42.544722 3724128 kubeadm.go:310] CGROUPS_PIDS: enabled
I0723 15:19:42.544774 3724128 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0723 15:19:42.544819 3724128 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0723 15:19:42.544888 3724128 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0723 15:19:42.544978 3724128 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0723 15:19:42.545067 3724128 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0723 15:19:42.545131 3724128 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0723 15:19:42.547542 3724128 out.go:204] - Generating certificates and keys ...
I0723 15:19:42.547646 3724128 kubeadm.go:310] [certs] Using existing ca certificate authority
I0723 15:19:42.547710 3724128 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0723 15:19:42.547774 3724128 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0723 15:19:42.547828 3724128 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0723 15:19:42.547886 3724128 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0723 15:19:42.547934 3724128 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0723 15:19:42.547986 3724128 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0723 15:19:42.548112 3724128 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-845285 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0723 15:19:42.548164 3724128 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0723 15:19:42.548279 3724128 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-845285 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0723 15:19:42.548422 3724128 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0723 15:19:42.548526 3724128 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0723 15:19:42.548578 3724128 kubeadm.go:310] [certs] Generating "sa" key and public key
I0723 15:19:42.548635 3724128 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0723 15:19:42.548735 3724128 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0723 15:19:42.548805 3724128 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0723 15:19:42.548867 3724128 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0723 15:19:42.548934 3724128 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0723 15:19:42.548989 3724128 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0723 15:19:42.549083 3724128 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0723 15:19:42.549152 3724128 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0723 15:19:42.551519 3724128 out.go:204] - Booting up control plane ...
I0723 15:19:42.551629 3724128 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0723 15:19:42.551728 3724128 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0723 15:19:42.551807 3724128 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0723 15:19:42.551934 3724128 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0723 15:19:42.552038 3724128 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0723 15:19:42.552083 3724128 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0723 15:19:42.552255 3724128 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0723 15:19:42.552412 3724128 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
I0723 15:19:42.552503 3724128 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.502720819s
I0723 15:19:42.552591 3724128 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0723 15:19:42.552662 3724128 kubeadm.go:310] [api-check] The API server is healthy after 7.00143685s
I0723 15:19:42.552767 3724128 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0723 15:19:42.552890 3724128 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0723 15:19:42.552952 3724128 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0723 15:19:42.553134 3724128 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-845285 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0723 15:19:42.553191 3724128 kubeadm.go:310] [bootstrap-token] Using token: s9hdmq.6v14ns5nf6lh2yt3
I0723 15:19:42.555856 3724128 out.go:204] - Configuring RBAC rules ...
I0723 15:19:42.555987 3724128 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0723 15:19:42.556089 3724128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0723 15:19:42.556256 3724128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0723 15:19:42.556440 3724128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0723 15:19:42.556563 3724128 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0723 15:19:42.556649 3724128 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0723 15:19:42.556800 3724128 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0723 15:19:42.556862 3724128 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0723 15:19:42.556920 3724128 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0723 15:19:42.556959 3724128 kubeadm.go:310]
I0723 15:19:42.557040 3724128 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0723 15:19:42.557050 3724128 kubeadm.go:310]
I0723 15:19:42.557140 3724128 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0723 15:19:42.557148 3724128 kubeadm.go:310]
I0723 15:19:42.557173 3724128 kubeadm.go:310] mkdir -p $HOME/.kube
I0723 15:19:42.557241 3724128 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0723 15:19:42.557309 3724128 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0723 15:19:42.557321 3724128 kubeadm.go:310]
I0723 15:19:42.557388 3724128 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0723 15:19:42.557396 3724128 kubeadm.go:310]
I0723 15:19:42.557453 3724128 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0723 15:19:42.557464 3724128 kubeadm.go:310]
I0723 15:19:42.557522 3724128 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0723 15:19:42.557598 3724128 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0723 15:19:42.557686 3724128 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0723 15:19:42.557695 3724128 kubeadm.go:310]
I0723 15:19:42.557799 3724128 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0723 15:19:42.557917 3724128 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0723 15:19:42.557928 3724128 kubeadm.go:310]
I0723 15:19:42.558011 3724128 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s9hdmq.6v14ns5nf6lh2yt3 \
I0723 15:19:42.558120 3724128 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:719759b2eeeee758bfc7fcd3a21492aa1ab75cb7c22c2fa0ec890f4ab49c0da6 \
I0723 15:19:42.558147 3724128 kubeadm.go:310] --control-plane
I0723 15:19:42.558157 3724128 kubeadm.go:310]
I0723 15:19:42.558267 3724128 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0723 15:19:42.558279 3724128 kubeadm.go:310]
I0723 15:19:42.558362 3724128 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s9hdmq.6v14ns5nf6lh2yt3 \
I0723 15:19:42.558488 3724128 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:719759b2eeeee758bfc7fcd3a21492aa1ab75cb7c22c2fa0ec890f4ab49c0da6
I0723 15:19:42.558503 3724128 cni.go:84] Creating CNI manager for ""
I0723 15:19:42.558511 3724128 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0723 15:19:42.560808 3724128 out.go:177] * Configuring CNI (Container Networking Interface) ...
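
For the docker driver with the containerd runtime, "Configuring CNI" means writing the recommended kindnet manifest to /var/tmp/minikube/cni.yaml and applying it with the bundled kubectl, as the 15:19:42.56x lines from this same process show just below. A minimal sketch of that apply step (paths are the ones logged for this run, not general defaults):

  sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply \
    --kubeconfig=/var/lib/minikube/kubeconfig \
    -f /var/tmp/minikube/cni.yaml
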
I0723 15:19:39.386817 3714598 logs.go:123] Gathering logs for etcd [e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69] ...
I0723 15:19:39.386849 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69"
I0723 15:19:39.461694 3714598 logs.go:123] Gathering logs for etcd [38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad] ...
I0723 15:19:39.461728 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad"
I0723 15:19:39.550639 3714598 logs.go:123] Gathering logs for coredns [9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b] ...
I0723 15:19:39.550672 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b"
I0723 15:19:39.685274 3714598 logs.go:123] Gathering logs for kube-scheduler [2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a] ...
I0723 15:19:39.685302 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a"
I0723 15:19:39.767623 3714598 logs.go:123] Gathering logs for kube-controller-manager [04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876] ...
I0723 15:19:39.767658 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876"
I0723 15:19:39.848209 3714598 logs.go:123] Gathering logs for kubelet ...
I0723 15:19:39.848249 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0723 15:19:39.905588 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997695 651 reflector.go:138] object-"kube-system"/"metrics-server-token-555md": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-555md" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.905840 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997781 651 reflector.go:138] object-"kube-system"/"storage-provisioner-token-52k6k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-52k6k" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.906056 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997861 651 reflector.go:138] object-"kube-system"/"coredns-token-jjvjk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jjvjk" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.906259 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997906 651 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.906478 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.998074 651 reflector.go:138] object-"kube-system"/"kube-proxy-token-s27dt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s27dt" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.906685 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.998109 651 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.906896 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:14.006437 651 reflector.go:138] object-"kube-system"/"kindnet-token-s2lzs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-s2lzs" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.907120 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:14.008948 651 reflector.go:138] object-"default"/"default-token-77r4d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-77r4d" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:39.918723 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:16 old-k8s-version-808561 kubelet[651]: E0723 15:14:16.784630 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:39.918922 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:17 old-k8s-version-808561 kubelet[651]: E0723 15:14:17.688046 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.921852 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:31 old-k8s-version-808561 kubelet[651]: E0723 15:14:31.509921 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:39.924131 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.489157 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.924611 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.834885 651 pod_workers.go:191] Error syncing pod 696d8a65-c479-4c8f-80f4-2d9b92600046 ("storage-provisioner_kube-system(696d8a65-c479-4c8f-80f4-2d9b92600046)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(696d8a65-c479-4c8f-80f4-2d9b92600046)"
W0723 15:19:39.924951 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.850656 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.925451 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:47 old-k8s-version-808561 kubelet[651]: E0723 15:14:47.855510 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.925788 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:51 old-k8s-version-808561 kubelet[651]: E0723 15:14:51.896987 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.928719 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:59 old-k8s-version-808561 kubelet[651]: E0723 15:14:59.501566 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:39.929323 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:06 old-k8s-version-808561 kubelet[651]: E0723 15:15:06.932141 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.929512 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:11 old-k8s-version-808561 kubelet[651]: E0723 15:15:11.484662 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.929843 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:11 old-k8s-version-808561 kubelet[651]: E0723 15:15:11.896849 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.930030 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:22 old-k8s-version-808561 kubelet[651]: E0723 15:15:22.484956 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.930365 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:23 old-k8s-version-808561 kubelet[651]: E0723 15:15:23.484104 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.930551 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:35 old-k8s-version-808561 kubelet[651]: E0723 15:15:35.487200 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.931143 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:37 old-k8s-version-808561 kubelet[651]: E0723 15:15:37.008061 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.931481 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:41 old-k8s-version-808561 kubelet[651]: E0723 15:15:41.896584 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.934021 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:49 old-k8s-version-808561 kubelet[651]: E0723 15:15:49.495729 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:39.934361 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:53 old-k8s-version-808561 kubelet[651]: E0723 15:15:53.484024 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.934550 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:04 old-k8s-version-808561 kubelet[651]: E0723 15:16:04.492638 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.934880 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:08 old-k8s-version-808561 kubelet[651]: E0723 15:16:08.489645 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.935070 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:19 old-k8s-version-808561 kubelet[651]: E0723 15:16:19.484440 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.935675 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:24 old-k8s-version-808561 kubelet[651]: E0723 15:16:24.156639 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.936007 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:31 old-k8s-version-808561 kubelet[651]: E0723 15:16:31.896663 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.936193 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:32 old-k8s-version-808561 kubelet[651]: E0723 15:16:32.484517 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.936563 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:44 old-k8s-version-808561 kubelet[651]: E0723 15:16:44.485094 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.936754 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:47 old-k8s-version-808561 kubelet[651]: E0723 15:16:47.484454 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.937089 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:57 old-k8s-version-808561 kubelet[651]: E0723 15:16:57.484249 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.937275 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:00 old-k8s-version-808561 kubelet[651]: E0723 15:17:00.498327 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.937608 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:09 old-k8s-version-808561 kubelet[651]: E0723 15:17:09.484011 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.940643 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:12 old-k8s-version-808561 kubelet[651]: E0723 15:17:12.496276 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:39.941002 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:24 old-k8s-version-808561 kubelet[651]: E0723 15:17:24.485118 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.941191 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:26 old-k8s-version-808561 kubelet[651]: E0723 15:17:26.486450 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.941525 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:37 old-k8s-version-808561 kubelet[651]: E0723 15:17:37.484898 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.941710 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:37 old-k8s-version-808561 kubelet[651]: E0723 15:17:37.489346 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.942300 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:50 old-k8s-version-808561 kubelet[651]: E0723 15:17:50.400777 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.942487 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:50 old-k8s-version-808561 kubelet[651]: E0723 15:17:50.492734 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.942819 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:51 old-k8s-version-808561 kubelet[651]: E0723 15:17:51.896923 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.943009 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:02 old-k8s-version-808561 kubelet[651]: E0723 15:18:02.485438 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.943348 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:05 old-k8s-version-808561 kubelet[651]: E0723 15:18:05.484049 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.943560 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:16 old-k8s-version-808561 kubelet[651]: E0723 15:18:16.485295 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.943895 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:17 old-k8s-version-808561 kubelet[651]: E0723 15:18:17.484062 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.944227 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:29 old-k8s-version-808561 kubelet[651]: E0723 15:18:29.484090 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.944418 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:31 old-k8s-version-808561 kubelet[651]: E0723 15:18:31.484369 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.944750 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:42 old-k8s-version-808561 kubelet[651]: E0723 15:18:42.487949 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.944936 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:42 old-k8s-version-808561 kubelet[651]: E0723 15:18:42.488698 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.945299 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:53 old-k8s-version-808561 kubelet[651]: E0723 15:18:53.484799 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.945490 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:56 old-k8s-version-808561 kubelet[651]: E0723 15:18:56.487827 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.945675 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:07 old-k8s-version-808561 kubelet[651]: E0723 15:19:07.488216 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.946009 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:08 old-k8s-version-808561 kubelet[651]: E0723 15:19:08.484416 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.946343 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:21 old-k8s-version-808561 kubelet[651]: E0723 15:19:21.484079 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.946529 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:22 old-k8s-version-808561 kubelet[651]: E0723 15:19:22.484915 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:39.946860 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:34 old-k8s-version-808561 kubelet[651]: E0723 15:19:34.485043 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:39.947051 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:37 old-k8s-version-808561 kubelet[651]: E0723 15:19:37.484452 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
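
Every ErrImagePull/ImagePullBackOff entry above is expected in this test: the metrics-server image is deliberately pinned to the unresolvable fake.domain registry (see "Using image fake.domain/registry.k8s.io/echoserver:1.4" in stdout), so the pull can never succeed and the pod backs off indefinitely. To inspect the same failures from outside the log, one option (pod names copied from the entries above, so they only apply to this run):

  kubectl -n kube-system describe pod metrics-server-9975d5f86-mrdtz
  kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-sr6m9
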
I0723 15:19:39.947061 3714598 logs.go:123] Gathering logs for kube-apiserver [3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02] ...
I0723 15:19:39.947075 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02"
I0723 15:19:40.015018 3714598 logs.go:123] Gathering logs for storage-provisioner [a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2] ...
I0723 15:19:40.015060 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2"
I0723 15:19:40.079126 3714598 logs.go:123] Gathering logs for kindnet [8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac] ...
I0723 15:19:40.079162 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac"
I0723 15:19:40.147283 3714598 logs.go:123] Gathering logs for kindnet [abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224] ...
I0723 15:19:40.147332 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224"
I0723 15:19:40.225304 3714598 logs.go:123] Gathering logs for dmesg ...
I0723 15:19:40.225339 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0723 15:19:40.248168 3714598 logs.go:123] Gathering logs for kube-apiserver [3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b] ...
I0723 15:19:40.248199 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b"
I0723 15:19:40.337459 3714598 logs.go:123] Gathering logs for coredns [21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7] ...
I0723 15:19:40.337496 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7"
I0723 15:19:40.395987 3714598 logs.go:123] Gathering logs for kube-scheduler [2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999] ...
I0723 15:19:40.396021 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999"
I0723 15:19:40.457141 3714598 logs.go:123] Gathering logs for kube-proxy [c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30] ...
I0723 15:19:40.457226 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30"
I0723 15:19:40.515290 3714598 logs.go:123] Gathering logs for kube-controller-manager [8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd] ...
I0723 15:19:40.515320 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd"
I0723 15:19:40.584717 3714598 out.go:304] Setting ErrFile to fd 2...
I0723 15:19:40.584750 3714598 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0723 15:19:40.584820 3714598 out.go:239] X Problems detected in kubelet:
W0723 15:19:40.584839 3714598 out.go:239] Jul 23 15:19:08 old-k8s-version-808561 kubelet[651]: E0723 15:19:08.484416 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:40.584851 3714598 out.go:239] Jul 23 15:19:21 old-k8s-version-808561 kubelet[651]: E0723 15:19:21.484079 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:40.584860 3714598 out.go:239] Jul 23 15:19:22 old-k8s-version-808561 kubelet[651]: E0723 15:19:22.484915 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:40.585052 3714598 out.go:239] Jul 23 15:19:34 old-k8s-version-808561 kubelet[651]: E0723 15:19:34.485043 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:40.585072 3714598 out.go:239] Jul 23 15:19:37 old-k8s-version-808561 kubelet[651]: E0723 15:19:37.484452 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0723 15:19:40.585080 3714598 out.go:304] Setting ErrFile to fd 2...
I0723 15:19:40.585096 3714598 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 15:19:42.563427 3724128 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0723 15:19:42.567897 3724128 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
I0723 15:19:42.567916 3724128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
I0723 15:19:42.589840 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0723 15:19:42.894125 3724128 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0723 15:19:42.894291 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:42.894399 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-845285 minikube.k8s.io/updated_at=2024_07_23T15_19_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6 minikube.k8s.io/name=embed-certs-845285 minikube.k8s.io/primary=true
I0723 15:19:43.128379 3724128 ops.go:34] apiserver oom_adj: -16
I0723 15:19:43.128468 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:43.629261 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:44.129264 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:44.628631 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:45.128975 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:45.629280 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:46.129354 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:46.628535 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:47.128661 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:47.629082 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:48.129436 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:48.628609 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:49.129171 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:49.629152 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:50.129430 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:50.586268 3714598 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0723 15:19:50.601983 3714598 api_server.go:72] duration metric: took 5m54.077146224s to wait for apiserver process to appear ...
I0723 15:19:50.602012 3714598 api_server.go:88] waiting for apiserver healthz status ...
I0723 15:19:50.602052 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0723 15:19:50.602110 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0723 15:19:50.649752 3714598 cri.go:89] found id: "3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b"
I0723 15:19:50.649777 3714598 cri.go:89] found id: "3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02"
I0723 15:19:50.649782 3714598 cri.go:89] found id: ""
I0723 15:19:50.649789 3714598 logs.go:276] 2 containers: [3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b 3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02]
I0723 15:19:50.649855 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.654322 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.658405 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0723 15:19:50.658522 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0723 15:19:50.713080 3714598 cri.go:89] found id: "e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69"
I0723 15:19:50.713103 3714598 cri.go:89] found id: "38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad"
I0723 15:19:50.713109 3714598 cri.go:89] found id: ""
I0723 15:19:50.713116 3714598 logs.go:276] 2 containers: [e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69 38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad]
I0723 15:19:50.713179 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.717803 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.722153 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0723 15:19:50.722230 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0723 15:19:50.775333 3714598 cri.go:89] found id: "21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7"
I0723 15:19:50.775358 3714598 cri.go:89] found id: "9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b"
I0723 15:19:50.775362 3714598 cri.go:89] found id: ""
I0723 15:19:50.775369 3714598 logs.go:276] 2 containers: [21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7 9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b]
I0723 15:19:50.775456 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.785807 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.790120 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0723 15:19:50.790191 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0723 15:19:50.853470 3714598 cri.go:89] found id: "2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a"
I0723 15:19:50.853495 3714598 cri.go:89] found id: "2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999"
I0723 15:19:50.853500 3714598 cri.go:89] found id: ""
I0723 15:19:50.853507 3714598 logs.go:276] 2 containers: [2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a 2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999]
I0723 15:19:50.853565 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.857352 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.861304 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0723 15:19:50.861374 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0723 15:19:50.899012 3714598 cri.go:89] found id: "77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c"
I0723 15:19:50.899033 3714598 cri.go:89] found id: "c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30"
I0723 15:19:50.899037 3714598 cri.go:89] found id: ""
I0723 15:19:50.899044 3714598 logs.go:276] 2 containers: [77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30]
I0723 15:19:50.899111 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.902779 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.906249 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0723 15:19:50.906320 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0723 15:19:50.944973 3714598 cri.go:89] found id: "04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876"
I0723 15:19:50.944997 3714598 cri.go:89] found id: "8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd"
I0723 15:19:50.945003 3714598 cri.go:89] found id: ""
I0723 15:19:50.945010 3714598 logs.go:276] 2 containers: [04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876 8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd]
I0723 15:19:50.945067 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.948972 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:50.952796 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0723 15:19:50.952897 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0723 15:19:50.997448 3714598 cri.go:89] found id: "8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac"
I0723 15:19:50.997473 3714598 cri.go:89] found id: "abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224"
I0723 15:19:50.997478 3714598 cri.go:89] found id: ""
I0723 15:19:50.997484 3714598 logs.go:276] 2 containers: [8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224]
I0723 15:19:50.997570 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:51.001317 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:51.007158 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0723 15:19:51.007302 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0723 15:19:51.053632 3714598 cri.go:89] found id: "c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8"
I0723 15:19:51.053712 3714598 cri.go:89] found id: ""
I0723 15:19:51.053737 3714598 logs.go:276] 1 containers: [c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8]
I0723 15:19:51.053818 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:51.058478 3714598 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0723 15:19:51.058585 3714598 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0723 15:19:51.104046 3714598 cri.go:89] found id: "a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2"
I0723 15:19:51.104075 3714598 cri.go:89] found id: "9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6"
I0723 15:19:51.104081 3714598 cri.go:89] found id: ""
I0723 15:19:51.104088 3714598 logs.go:276] 2 containers: [a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2 9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6]
I0723 15:19:51.104151 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:51.109019 3714598 ssh_runner.go:195] Run: which crictl
I0723 15:19:51.113791 3714598 logs.go:123] Gathering logs for dmesg ...
I0723 15:19:51.113868 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0723 15:19:51.140653 3714598 logs.go:123] Gathering logs for describe nodes ...
I0723 15:19:51.140735 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0723 15:19:51.326563 3714598 logs.go:123] Gathering logs for kube-apiserver [3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b] ...
I0723 15:19:51.326598 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b"
I0723 15:19:51.386022 3714598 logs.go:123] Gathering logs for kube-apiserver [3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02] ...
I0723 15:19:51.386056 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02"
I0723 15:19:51.437413 3714598 logs.go:123] Gathering logs for kube-proxy [77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c] ...
I0723 15:19:51.437468 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c"
I0723 15:19:51.475222 3714598 logs.go:123] Gathering logs for kube-proxy [c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30] ...
I0723 15:19:51.475249 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30"
I0723 15:19:51.522639 3714598 logs.go:123] Gathering logs for kubernetes-dashboard [c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8] ...
I0723 15:19:51.522667 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8"
I0723 15:19:51.562605 3714598 logs.go:123] Gathering logs for kubelet ...
I0723 15:19:51.562632 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0723 15:19:51.609723 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997695 651 reflector.go:138] object-"kube-system"/"metrics-server-token-555md": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-555md" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.610001 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997781 651 reflector.go:138] object-"kube-system"/"storage-provisioner-token-52k6k": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-52k6k" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.610220 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997861 651 reflector.go:138] object-"kube-system"/"coredns-token-jjvjk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jjvjk" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.610429 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.997906 651 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.610655 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.998074 651 reflector.go:138] object-"kube-system"/"kube-proxy-token-s27dt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-s27dt" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.610861 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:13.998109 651 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.611075 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:14.006437 651 reflector.go:138] object-"kube-system"/"kindnet-token-s2lzs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-s2lzs" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.611283 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:14 old-k8s-version-808561 kubelet[651]: E0723 15:14:14.008948 651 reflector.go:138] object-"default"/"default-token-77r4d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-77r4d" is forbidden: User "system:node:old-k8s-version-808561" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-808561' and this object
W0723 15:19:51.622766 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:16 old-k8s-version-808561 kubelet[651]: E0723 15:14:16.784630 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:51.622961 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:17 old-k8s-version-808561 kubelet[651]: E0723 15:14:17.688046 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.625780 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:31 old-k8s-version-808561 kubelet[651]: E0723 15:14:31.509921 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:51.628051 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.489157 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.628581 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.834885 651 pod_workers.go:191] Error syncing pod 696d8a65-c479-4c8f-80f4-2d9b92600046 ("storage-provisioner_kube-system(696d8a65-c479-4c8f-80f4-2d9b92600046)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(696d8a65-c479-4c8f-80f4-2d9b92600046)"
W0723 15:19:51.629068 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:46 old-k8s-version-808561 kubelet[651]: E0723 15:14:46.850656 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.629714 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:47 old-k8s-version-808561 kubelet[651]: E0723 15:14:47.855510 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.630137 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:51 old-k8s-version-808561 kubelet[651]: E0723 15:14:51.896987 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.633294 3714598 logs.go:138] Found kubelet problem: Jul 23 15:14:59 old-k8s-version-808561 kubelet[651]: E0723 15:14:59.501566 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:51.633904 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:06 old-k8s-version-808561 kubelet[651]: E0723 15:15:06.932141 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.634091 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:11 old-k8s-version-808561 kubelet[651]: E0723 15:15:11.484662 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.634421 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:11 old-k8s-version-808561 kubelet[651]: E0723 15:15:11.896849 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.634608 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:22 old-k8s-version-808561 kubelet[651]: E0723 15:15:22.484956 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.634992 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:23 old-k8s-version-808561 kubelet[651]: E0723 15:15:23.484104 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.635183 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:35 old-k8s-version-808561 kubelet[651]: E0723 15:15:35.487200 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.635780 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:37 old-k8s-version-808561 kubelet[651]: E0723 15:15:37.008061 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.636128 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:41 old-k8s-version-808561 kubelet[651]: E0723 15:15:41.896584 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.638602 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:49 old-k8s-version-808561 kubelet[651]: E0723 15:15:49.495729 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:51.638938 3714598 logs.go:138] Found kubelet problem: Jul 23 15:15:53 old-k8s-version-808561 kubelet[651]: E0723 15:15:53.484024 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.639126 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:04 old-k8s-version-808561 kubelet[651]: E0723 15:16:04.492638 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.639464 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:08 old-k8s-version-808561 kubelet[651]: E0723 15:16:08.489645 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.639655 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:19 old-k8s-version-808561 kubelet[651]: E0723 15:16:19.484440 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.640251 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:24 old-k8s-version-808561 kubelet[651]: E0723 15:16:24.156639 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.640610 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:31 old-k8s-version-808561 kubelet[651]: E0723 15:16:31.896663 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.640798 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:32 old-k8s-version-808561 kubelet[651]: E0723 15:16:32.484517 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.641132 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:44 old-k8s-version-808561 kubelet[651]: E0723 15:16:44.485094 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.641319 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:47 old-k8s-version-808561 kubelet[651]: E0723 15:16:47.484454 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.641650 3714598 logs.go:138] Found kubelet problem: Jul 23 15:16:57 old-k8s-version-808561 kubelet[651]: E0723 15:16:57.484249 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.641836 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:00 old-k8s-version-808561 kubelet[651]: E0723 15:17:00.498327 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.642170 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:09 old-k8s-version-808561 kubelet[651]: E0723 15:17:09.484011 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.644838 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:12 old-k8s-version-808561 kubelet[651]: E0723 15:17:12.496276 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
W0723 15:19:51.645183 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:24 old-k8s-version-808561 kubelet[651]: E0723 15:17:24.485118 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.645372 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:26 old-k8s-version-808561 kubelet[651]: E0723 15:17:26.486450 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.645705 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:37 old-k8s-version-808561 kubelet[651]: E0723 15:17:37.484898 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.645891 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:37 old-k8s-version-808561 kubelet[651]: E0723 15:17:37.489346 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.646523 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:50 old-k8s-version-808561 kubelet[651]: E0723 15:17:50.400777 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.646714 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:50 old-k8s-version-808561 kubelet[651]: E0723 15:17:50.492734 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.647054 3714598 logs.go:138] Found kubelet problem: Jul 23 15:17:51 old-k8s-version-808561 kubelet[651]: E0723 15:17:51.896923 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.647241 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:02 old-k8s-version-808561 kubelet[651]: E0723 15:18:02.485438 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.652403 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:05 old-k8s-version-808561 kubelet[651]: E0723 15:18:05.484049 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.652623 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:16 old-k8s-version-808561 kubelet[651]: E0723 15:18:16.485295 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.652993 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:17 old-k8s-version-808561 kubelet[651]: E0723 15:18:17.484062 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.653328 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:29 old-k8s-version-808561 kubelet[651]: E0723 15:18:29.484090 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.653517 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:31 old-k8s-version-808561 kubelet[651]: E0723 15:18:31.484369 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.653849 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:42 old-k8s-version-808561 kubelet[651]: E0723 15:18:42.487949 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.654032 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:42 old-k8s-version-808561 kubelet[651]: E0723 15:18:42.488698 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.654363 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:53 old-k8s-version-808561 kubelet[651]: E0723 15:18:53.484799 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.654549 3714598 logs.go:138] Found kubelet problem: Jul 23 15:18:56 old-k8s-version-808561 kubelet[651]: E0723 15:18:56.487827 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.654735 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:07 old-k8s-version-808561 kubelet[651]: E0723 15:19:07.488216 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.655066 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:08 old-k8s-version-808561 kubelet[651]: E0723 15:19:08.484416 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.655405 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:21 old-k8s-version-808561 kubelet[651]: E0723 15:19:21.484079 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.660007 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:22 old-k8s-version-808561 kubelet[651]: E0723 15:19:22.484915 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.660411 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:34 old-k8s-version-808561 kubelet[651]: E0723 15:19:34.485043 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.660612 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:37 old-k8s-version-808561 kubelet[651]: E0723 15:19:37.484452 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:51.660947 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:45 old-k8s-version-808561 kubelet[651]: E0723 15:19:45.484096 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:51.661134 3714598 logs.go:138] Found kubelet problem: Jul 23 15:19:50 old-k8s-version-808561 kubelet[651]: E0723 15:19:50.484391 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0723 15:19:51.661154 3714598 logs.go:123] Gathering logs for coredns [9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b] ...
I0723 15:19:51.661173 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b"
I0723 15:19:51.711411 3714598 logs.go:123] Gathering logs for kube-scheduler [2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999] ...
I0723 15:19:51.711442 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999"
I0723 15:19:51.772418 3714598 logs.go:123] Gathering logs for kube-controller-manager [04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876] ...
I0723 15:19:51.772449 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876"
I0723 15:19:51.846910 3714598 logs.go:123] Gathering logs for containerd ...
I0723 15:19:51.846945 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0723 15:19:51.922227 3714598 logs.go:123] Gathering logs for container status ...
I0723 15:19:51.922272 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0723 15:19:51.972278 3714598 logs.go:123] Gathering logs for coredns [21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7] ...
I0723 15:19:51.972362 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7"
I0723 15:19:52.013921 3714598 logs.go:123] Gathering logs for etcd [38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad] ...
I0723 15:19:52.013951 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad"
I0723 15:19:52.064248 3714598 logs.go:123] Gathering logs for kube-scheduler [2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a] ...
I0723 15:19:52.064278 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a"
I0723 15:19:52.129820 3714598 logs.go:123] Gathering logs for kindnet [abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224] ...
I0723 15:19:52.129848 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224"
I0723 15:19:52.248551 3714598 logs.go:123] Gathering logs for storage-provisioner [a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2] ...
I0723 15:19:52.248587 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2"
I0723 15:19:52.300611 3714598 logs.go:123] Gathering logs for storage-provisioner [9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6] ...
I0723 15:19:52.300639 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6"
I0723 15:19:52.341928 3714598 logs.go:123] Gathering logs for etcd [e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69] ...
I0723 15:19:52.341960 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69"
I0723 15:19:52.383724 3714598 logs.go:123] Gathering logs for kindnet [8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac] ...
I0723 15:19:52.383755 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac"
I0723 15:19:52.440533 3714598 logs.go:123] Gathering logs for kube-controller-manager [8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd] ...
I0723 15:19:52.440578 3714598 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd"
I0723 15:19:52.512691 3714598 out.go:304] Setting ErrFile to fd 2...
I0723 15:19:52.512725 3714598 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0723 15:19:52.512855 3714598 out.go:239] X Problems detected in kubelet:
W0723 15:19:52.512876 3714598 out.go:239] Jul 23 15:19:22 old-k8s-version-808561 kubelet[651]: E0723 15:19:22.484915 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:52.512904 3714598 out.go:239] Jul 23 15:19:34 old-k8s-version-808561 kubelet[651]: E0723 15:19:34.485043 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:52.512914 3714598 out.go:239] Jul 23 15:19:37 old-k8s-version-808561 kubelet[651]: E0723 15:19:37.484452 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0723 15:19:52.512922 3714598 out.go:239] Jul 23 15:19:45 old-k8s-version-808561 kubelet[651]: E0723 15:19:45.484096 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
W0723 15:19:52.512932 3714598 out.go:239] Jul 23 15:19:50 old-k8s-version-808561 kubelet[651]: E0723 15:19:50.484391 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0723 15:19:52.512939 3714598 out.go:304] Setting ErrFile to fd 2...
I0723 15:19:52.512947 3714598 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0723 15:19:50.629611 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:51.129152 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:51.628652 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:52.129183 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:52.629138 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:53.128838 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:53.628914 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:54.129550 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:54.629400 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:55.128781 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:55.629304 3724128 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I0723 15:19:55.854708 3724128 kubeadm.go:1113] duration metric: took 12.960467927s to wait for elevateKubeSystemPrivileges
I0723 15:19:55.854735 3724128 kubeadm.go:394] duration metric: took 29.879099724s to StartCluster
I0723 15:19:55.854753 3724128 settings.go:142] acquiring lock: {Name:mk139a8165d464eadea1fdaad6cd0d3bdc374703 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:19:55.854814 3724128 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19319-3501487/kubeconfig
I0723 15:19:55.856259 3724128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19319-3501487/kubeconfig: {Name:mk28c68c9d9b78842c0266c09085cd617f54ca70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0723 15:19:55.856621 3724128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0723 15:19:55.856628 3724128 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0723 15:19:55.856869 3724128 config.go:182] Loaded profile config "embed-certs-845285": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0723 15:19:55.856908 3724128 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0723 15:19:55.856976 3724128 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-845285"
I0723 15:19:55.856999 3724128 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-845285"
I0723 15:19:55.857026 3724128 host.go:66] Checking if "embed-certs-845285" exists ...
I0723 15:19:55.857481 3724128 cli_runner.go:164] Run: docker container inspect embed-certs-845285 --format={{.State.Status}}
I0723 15:19:55.857986 3724128 addons.go:69] Setting default-storageclass=true in profile "embed-certs-845285"
I0723 15:19:55.858025 3724128 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-845285"
I0723 15:19:55.858279 3724128 cli_runner.go:164] Run: docker container inspect embed-certs-845285 --format={{.State.Status}}
I0723 15:19:55.859615 3724128 out.go:177] * Verifying Kubernetes components...
I0723 15:19:55.863140 3724128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0723 15:19:55.887421 3724128 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0723 15:19:55.892460 3724128 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0723 15:19:55.892483 3724128 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0723 15:19:55.892552 3724128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845285
I0723 15:19:55.912870 3724128 addons.go:234] Setting addon default-storageclass=true in "embed-certs-845285"
I0723 15:19:55.916506 3724128 host.go:66] Checking if "embed-certs-845285" exists ...
I0723 15:19:55.916985 3724128 cli_runner.go:164] Run: docker container inspect embed-certs-845285 --format={{.State.Status}}
I0723 15:19:55.937876 3724128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37481 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/embed-certs-845285/id_rsa Username:docker}
I0723 15:19:55.954728 3724128 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0723 15:19:55.954751 3724128 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0723 15:19:55.954816 3724128 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-845285
I0723 15:19:55.981817 3724128 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:37481 SSHKeyPath:/home/jenkins/minikube-integration/19319-3501487/.minikube/machines/embed-certs-845285/id_rsa Username:docker}
I0723 15:19:56.279933 3724128 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.85.1 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0723 15:19:56.280114 3724128 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0723 15:19:56.330551 3724128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0723 15:19:56.362133 3724128 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0723 15:19:56.984072 3724128 node_ready.go:35] waiting up to 6m0s for node "embed-certs-845285" to be "Ready" ...
I0723 15:19:56.984387 3724128 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
I0723 15:19:57.015628 3724128 node_ready.go:49] node "embed-certs-845285" has status "Ready":"True"
I0723 15:19:57.015656 3724128 node_ready.go:38] duration metric: took 31.550296ms for node "embed-certs-845285" to be "Ready" ...
I0723 15:19:57.015666 3724128 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0723 15:19:57.032586 3724128 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-97zfw" in "kube-system" namespace to be "Ready" ...
I0723 15:19:57.363422 3724128 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
I0723 15:19:57.365900 3724128 addons.go:510] duration metric: took 1.508971693s for enable addons: enabled=[default-storageclass storage-provisioner]
I0723 15:19:57.488798 3724128 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-845285" context rescaled to 1 replicas
I0723 15:19:58.536208 3724128 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-97zfw" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-97zfw" not found
I0723 15:19:58.536350 3724128 pod_ready.go:81] duration metric: took 1.503668427s for pod "coredns-7db6d8ff4d-97zfw" in "kube-system" namespace to be "Ready" ...
E0723 15:19:58.536387 3724128 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-97zfw" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-97zfw" not found
I0723 15:19:58.536414 3724128 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-sfc4d" in "kube-system" namespace to be "Ready" ...
I0723 15:20:02.514702 3714598 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
I0723 15:20:02.524878 3714598 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
ok
I0723 15:20:02.527579 3714598 out.go:177]
W0723 15:20:02.529480 3714598 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0723 15:20:02.529519 3714598 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0723 15:20:02.529537 3714598 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0723 15:20:02.529545 3714598 out.go:239] *
W0723 15:20:02.530507 3714598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0723 15:20:02.532802 3714598 out.go:177]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
40f90f2129e06 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 cbd8e7d91a24e dashboard-metrics-scraper-8d5bb5db8-sr6m9
a58712e13026e ba04bb24b9575 5 minutes ago Running storage-provisioner 2 c421ce322f9a9 storage-provisioner
c3749e826f005 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 2c5a7006f0300 kubernetes-dashboard-cd95d586-5px4k
21aa4062c7fa2 db91994f4ee8f 5 minutes ago Running coredns 1 cd95addb114f1 coredns-74ff55c5b-shrvw
77f1843d43162 25a5233254979 5 minutes ago Running kube-proxy 1 343690201b896 kube-proxy-7tf2r
8137bee0aaca8 1611cd07b61d5 5 minutes ago Running busybox 1 3f770f5678f23 busybox
9f2eb986e370c ba04bb24b9575 5 minutes ago Exited storage-provisioner 1 c421ce322f9a9 storage-provisioner
8bc100bf50cc2 f42786f8afd22 5 minutes ago Running kindnet-cni 1 5b7d8484961e1 kindnet-qkzk5
04c63d20cc4fb 1df8a2b116bd1 5 minutes ago Running kube-controller-manager 1 88c25bc3393a3 kube-controller-manager-old-k8s-version-808561
2abc13facf853 e7605f88f17d6 5 minutes ago Running kube-scheduler 1 c219c4294967e kube-scheduler-old-k8s-version-808561
3cb8ca14a8939 2c08bbbc02d3a 5 minutes ago Running kube-apiserver 1 5fbf94a6e3f1a kube-apiserver-old-k8s-version-808561
e37be02ba2d0f 05b738aa1bc63 5 minutes ago Running etcd 1 3dc360ea1d07e etcd-old-k8s-version-808561
f5e023dc6c4c7 1611cd07b61d5 6 minutes ago Exited busybox 0 5b9ef067ae373 busybox
9b9f35f4c8329 db91994f4ee8f 8 minutes ago Exited coredns 0 bf7ea3f3d9cf1 coredns-74ff55c5b-shrvw
abcdeac9e387c f42786f8afd22 8 minutes ago Exited kindnet-cni 0 03531f7e1aa93 kindnet-qkzk5
c9c504e2b41d0 25a5233254979 8 minutes ago Exited kube-proxy 0 16eead96360c1 kube-proxy-7tf2r
38e39c13c2600 05b738aa1bc63 9 minutes ago Exited etcd 0 e78c72db600e5 etcd-old-k8s-version-808561
8d2b96d84f9be 1df8a2b116bd1 9 minutes ago Exited kube-controller-manager 0 b48447c48a04a kube-controller-manager-old-k8s-version-808561
2273baa5c8901 e7605f88f17d6 9 minutes ago Exited kube-scheduler 0 9c74fe0ebe558 kube-scheduler-old-k8s-version-808561
3159d2708da6f 2c08bbbc02d3a 9 minutes ago Exited kube-apiserver 0 49e3ef7c62357 kube-apiserver-old-k8s-version-808561
==> containerd <==
Jul 23 15:16:23 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:16:23.525165258Z" level=info msg="CreateContainer within sandbox \"cbd8e7d91a24e071b601403889b5048a189598de6f6fec7fa7e1200aa28f2603\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"58965a46bdfc693d89808a3a56f0229cf0c3618d94b6fb452234a93be2b9eb0b\""
Jul 23 15:16:23 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:16:23.525767060Z" level=info msg="StartContainer for \"58965a46bdfc693d89808a3a56f0229cf0c3618d94b6fb452234a93be2b9eb0b\""
Jul 23 15:16:23 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:16:23.594142608Z" level=info msg="StartContainer for \"58965a46bdfc693d89808a3a56f0229cf0c3618d94b6fb452234a93be2b9eb0b\" returns successfully"
Jul 23 15:16:23 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:16:23.625298976Z" level=info msg="shim disconnected" id=58965a46bdfc693d89808a3a56f0229cf0c3618d94b6fb452234a93be2b9eb0b namespace=k8s.io
Jul 23 15:16:23 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:16:23.625360564Z" level=warning msg="cleaning up after shim disconnected" id=58965a46bdfc693d89808a3a56f0229cf0c3618d94b6fb452234a93be2b9eb0b namespace=k8s.io
Jul 23 15:16:23 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:16:23.625371558Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 23 15:16:24 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:16:24.158396789Z" level=info msg="RemoveContainer for \"2350341daf411e8939446de76b255b2b568ac42fc383a3337c1142c0a67b8979\""
Jul 23 15:16:24 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:16:24.170770395Z" level=info msg="RemoveContainer for \"2350341daf411e8939446de76b255b2b568ac42fc383a3337c1142c0a67b8979\" returns successfully"
Jul 23 15:17:12 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:12.484869853Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 23 15:17:12 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:12.491806048Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" host=fake.domain
Jul 23 15:17:12 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:12.494271348Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
Jul 23 15:17:12 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:12.494349173Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jul 23 15:17:49 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:49.486072233Z" level=info msg="CreateContainer within sandbox \"cbd8e7d91a24e071b601403889b5048a189598de6f6fec7fa7e1200aa28f2603\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Jul 23 15:17:49 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:49.501933462Z" level=info msg="CreateContainer within sandbox \"cbd8e7d91a24e071b601403889b5048a189598de6f6fec7fa7e1200aa28f2603\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724\""
Jul 23 15:17:49 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:49.502449308Z" level=info msg="StartContainer for \"40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724\""
Jul 23 15:17:49 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:49.575872101Z" level=info msg="StartContainer for \"40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724\" returns successfully"
Jul 23 15:17:49 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:49.610432190Z" level=info msg="shim disconnected" id=40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724 namespace=k8s.io
Jul 23 15:17:49 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:49.610500333Z" level=warning msg="cleaning up after shim disconnected" id=40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724 namespace=k8s.io
Jul 23 15:17:49 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:49.610512132Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 23 15:17:50 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:50.399958199Z" level=info msg="RemoveContainer for \"58965a46bdfc693d89808a3a56f0229cf0c3618d94b6fb452234a93be2b9eb0b\""
Jul 23 15:17:50 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:17:50.406153114Z" level=info msg="RemoveContainer for \"58965a46bdfc693d89808a3a56f0229cf0c3618d94b6fb452234a93be2b9eb0b\" returns successfully"
Jul 23 15:20:03 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:20:03.485039971Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 23 15:20:03 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:20:03.500206125Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" host=fake.domain
Jul 23 15:20:03 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:20:03.502127658Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
Jul 23 15:20:03 old-k8s-version-808561 containerd[563]: time="2024-07-23T15:20:03.502203580Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
==> coredns [21aa4062c7fa266fbb0228514cff692fc11d5b6b64ac0b7c378ab70e75236cf7] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 46228aa6486c18e5dfc83d68f867dea6
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:45941 - 47201 "HINFO IN 7190331770475783895.3616161569229029271. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011235795s
==> coredns [9b9f35f4c832926eaefabf2389f9bde2cd7e018ab782dc7960abd3c4392d938b] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 46228aa6486c18e5dfc83d68f867dea6
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:42576 - 16384 "HINFO IN 8883844995071645045.748681092552363058. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021792737s
==> describe nodes <==
Name: old-k8s-version-808561
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-808561
kubernetes.io/os=linux
minikube.k8s.io/commit=8c09ef4366b737aec8aaa2bbc590bde27da814a6
minikube.k8s.io/name=old-k8s-version-808561
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_07_23T15_11_12_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 23 Jul 2024 15:11:09 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-808561
AcquireTime: <unset>
RenewTime: Tue, 23 Jul 2024 15:19:56 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 23 Jul 2024 15:15:14 +0000 Tue, 23 Jul 2024 15:11:02 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 23 Jul 2024 15:15:14 +0000 Tue, 23 Jul 2024 15:11:02 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 23 Jul 2024 15:15:14 +0000 Tue, 23 Jul 2024 15:11:02 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 23 Jul 2024 15:15:14 +0000 Tue, 23 Jul 2024 15:11:27 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.94.2
Hostname: old-k8s-version-808561
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022364Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022364Ki
pods: 110
System Info:
Machine ID: 1ea8a8dd89c6424abe2b4ecec91bfc5c
System UUID: c5fd57e7-b95d-4a28-afdd-d156400a21d6
Boot ID: 504c9c06-2714-4b6b-86f9-ce6cd916d665
Kernel Version: 5.15.0-1065-aws
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.19
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m39s
kube-system coredns-74ff55c5b-shrvw 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 8m37s
kube-system etcd-old-k8s-version-808561 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m44s
kube-system kindnet-qkzk5 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 8m37s
kube-system kube-apiserver-old-k8s-version-808561 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m44s
kube-system kube-controller-manager-old-k8s-version-808561 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m44s
kube-system kube-proxy-7tf2r 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m37s
kube-system kube-scheduler-old-k8s-version-808561 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m44s
kube-system metrics-server-9975d5f86-mrdtz 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m27s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m35s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-sr6m9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m33s
kubernetes-dashboard kubernetes-dashboard-cd95d586-5px4k 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m33s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 9m3s (x4 over 9m3s) kubelet Node old-k8s-version-808561 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m3s (x4 over 9m3s) kubelet Node old-k8s-version-808561 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m3s (x4 over 9m3s) kubelet Node old-k8s-version-808561 status is now: NodeHasSufficientPID
Normal Starting 8m44s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m44s kubelet Node old-k8s-version-808561 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m44s kubelet Node old-k8s-version-808561 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m44s kubelet Node old-k8s-version-808561 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m44s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m37s kubelet Node old-k8s-version-808561 status is now: NodeReady
Normal Starting 8m35s kube-proxy Starting kube-proxy.
Normal Starting 6m kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m (x9 over 6m) kubelet Node old-k8s-version-808561 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m (x7 over 6m) kubelet Node old-k8s-version-808561 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m (x7 over 6m) kubelet Node old-k8s-version-808561 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m kubelet Updated Node Allocatable limit across pods
Normal Starting 5m48s kube-proxy Starting kube-proxy.
==> dmesg <==
[ +0.001051] FS-Cache: O-key=[8] '9d505c0100000000'
[ +0.000704] FS-Cache: N-cookie c=000000e4 [p=000000db fl=2 nc=0 na=1]
[ +0.000982] FS-Cache: N-cookie d=000000004f7e18fb{9p.inode} n=00000000e8ba37de
[ +0.001052] FS-Cache: N-key=[8] '9d505c0100000000'
[ +0.003265] FS-Cache: Duplicate cookie detected
[ +0.000790] FS-Cache: O-cookie c=000000de [p=000000db fl=226 nc=0 na=1]
[ +0.001026] FS-Cache: O-cookie d=000000004f7e18fb{9p.inode} n=00000000eb1029d9
[ +0.001114] FS-Cache: O-key=[8] '9d505c0100000000'
[ +0.000804] FS-Cache: N-cookie c=000000e5 [p=000000db fl=2 nc=0 na=1]
[ +0.001043] FS-Cache: N-cookie d=000000004f7e18fb{9p.inode} n=0000000065a1ff71
[ +0.001196] FS-Cache: N-key=[8] '9d505c0100000000'
[ +3.717910] FS-Cache: Duplicate cookie detected
[ +0.000712] FS-Cache: O-cookie c=000000dc [p=000000db fl=226 nc=0 na=1]
[ +0.000981] FS-Cache: O-cookie d=000000004f7e18fb{9p.inode} n=00000000feabc226
[ +0.001046] FS-Cache: O-key=[8] '9c505c0100000000'
[ +0.000737] FS-Cache: N-cookie c=000000e7 [p=000000db fl=2 nc=0 na=1]
[ +0.000936] FS-Cache: N-cookie d=000000004f7e18fb{9p.inode} n=0000000018e8e3ac
[ +0.001048] FS-Cache: N-key=[8] '9c505c0100000000'
[ +0.272319] FS-Cache: Duplicate cookie detected
[ +0.000716] FS-Cache: O-cookie c=000000e1 [p=000000db fl=226 nc=0 na=1]
[ +0.001032] FS-Cache: O-cookie d=000000004f7e18fb{9p.inode} n=0000000049ed036f
[ +0.001069] FS-Cache: O-key=[8] 'a2505c0100000000'
[ +0.000751] FS-Cache: N-cookie c=000000e8 [p=000000db fl=2 nc=0 na=1]
[ +0.000962] FS-Cache: N-cookie d=000000004f7e18fb{9p.inode} n=00000000a6870089
[ +0.001082] FS-Cache: N-key=[8] 'a2505c0100000000'
==> etcd [38e39c13c2600ff495950450b67dd008ef9c4a6ec1fdec9d64ea24b3e10869ad] <==
raft2024/07/23 15:11:02 INFO: dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2
raft2024/07/23 15:11:02 INFO: dfc97eb0aae75b33 became leader at term 2
raft2024/07/23 15:11:02 INFO: raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2
2024-07-23 15:11:02.567395 I | etcdserver: published {Name:old-k8s-version-808561 ClientURLs:[https://192.168.94.2:2379]} to cluster da400bbece288f5a
2024-07-23 15:11:02.567464 I | embed: ready to serve client requests
2024-07-23 15:11:02.568952 I | embed: serving client requests on 127.0.0.1:2379
2024-07-23 15:11:02.569135 I | embed: ready to serve client requests
2024-07-23 15:11:02.571111 I | embed: serving client requests on 192.168.94.2:2379
2024-07-23 15:11:02.580825 I | etcdserver: setting up the initial cluster version to 3.4
2024-07-23 15:11:02.581269 N | etcdserver/membership: set the initial cluster version to 3.4
2024-07-23 15:11:02.581440 I | etcdserver/api: enabled capabilities for version 3.4
2024-07-23 15:11:26.604061 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:11:30.323620 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:11:40.323594 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:11:50.323698 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:12:00.323918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:12:10.323530 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:12:20.323582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:12:30.323642 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:12:40.323538 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:12:50.324520 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:13:00.323737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:13:10.323535 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:13:20.323585 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:13:30.323620 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [e37be02ba2d0f57794f59bc34b7524e2cabaa4842989046f4cc7e23d25f39e69] <==
2024-07-23 15:16:02.565146 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:16:12.565286 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:16:22.565175 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:16:32.565389 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:16:42.565097 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:16:52.565310 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:17:02.565484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:17:12.565318 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:17:22.565142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:17:32.565197 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:17:42.565201 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:17:52.565133 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:18:02.565234 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:18:12.565445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:18:22.565247 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:18:32.565239 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:18:42.565273 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:18:52.565297 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:19:02.565377 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:19:12.565246 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:19:22.565313 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:19:32.565411 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:19:42.565444 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:19:52.565276 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-23 15:20:02.566469 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
15:20:04 up 1 day, 2 min, 0 users, load average: 2.34, 2.18, 2.49
Linux old-k8s-version-808561 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
==> kindnet [8bc100bf50cc2e66005ceaad20a50169bed9648f1cc20d26b677c4fb025a9fac] <==
E0723 15:18:47.587313 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
W0723 15:18:55.070468 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0723 15:18:55.070510 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0723 15:18:56.741780 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:18:56.741815 1 main.go:299] handling current node
W0723 15:19:04.247907 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0723 15:19:04.248114 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
I0723 15:19:06.737854 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:19:06.737888 1 main.go:299] handling current node
I0723 15:19:16.737651 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:19:16.737691 1 main.go:299] handling current node
I0723 15:19:26.738296 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:19:26.738343 1 main.go:299] handling current node
W0723 15:19:34.827647 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0723 15:19:34.828768 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
I0723 15:19:36.737917 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:19:36.737959 1 main.go:299] handling current node
I0723 15:19:46.737840 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:19:46.737950 1 main.go:299] handling current node
W0723 15:19:53.872491 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0723 15:19:53.872550 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
W0723 15:19:54.377992 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0723 15:19:54.378034 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
I0723 15:19:56.737598 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:19:56.737643 1 main.go:299] handling current node
==> kindnet [abcdeac9e387cc016f5e02c2e96ce73ae7ca02e7a9fa0c5fe944d45e923f6224] <==
I0723 15:12:32.538358 1 main.go:299] handling current node
W0723 15:12:40.050607 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0723 15:12:40.050646 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
I0723 15:12:42.538117 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:12:42.538158 1 main.go:299] handling current node
W0723 15:12:46.063654 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0723 15:12:46.063692 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
W0723 15:12:46.072093 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0723 15:12:46.072212 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0723 15:12:52.537711 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:12:52.537749 1 main.go:299] handling current node
I0723 15:13:02.537510 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:13:02.537552 1 main.go:299] handling current node
I0723 15:13:12.537788 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:13:12.537824 1 main.go:299] handling current node
W0723 15:13:20.639681 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0723 15:13:20.639772 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
I0723 15:13:22.537460 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:13:22.537497 1 main.go:299] handling current node
W0723 15:13:23.288689 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0723 15:13:23.288940 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0723 15:13:32.538026 1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
I0723 15:13:32.538066 1 main.go:299] handling current node
W0723 15:13:35.388516 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0723 15:13:35.388560 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
==> kube-apiserver [3159d2708da6f1d96a70d2780973e1205517f8b923b98298da2fe9bcd1974d02] <==
I0723 15:11:09.935988 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0723 15:11:10.430572 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0723 15:11:10.477904 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0723 15:11:10.632646 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
I0723 15:11:10.634519 1 controller.go:606] quota admission added evaluator for: endpoints
I0723 15:11:10.638934 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0723 15:11:11.701267 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0723 15:11:11.955697 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0723 15:11:12.041569 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0723 15:11:20.542228 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0723 15:11:27.773600 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0723 15:11:27.910040 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0723 15:11:35.247818 1 client.go:360] parsed scheme: "passthrough"
I0723 15:11:35.247861 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0723 15:11:35.247872 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0723 15:12:07.886016 1 client.go:360] parsed scheme: "passthrough"
I0723 15:12:07.886060 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0723 15:12:07.886070 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0723 15:12:51.569088 1 client.go:360] parsed scheme: "passthrough"
I0723 15:12:51.569139 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0723 15:12:51.569148 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0723 15:13:31.791981 1 client.go:360] parsed scheme: "passthrough"
I0723 15:13:31.792040 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0723 15:13:31.792049 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0723 15:13:34.989373 1 upgradeaware.go:387] Error proxying data from backend to client: write tcp 192.168.94.2:8443->192.168.94.1:34468: write: broken pipe
==> kube-apiserver [3cb8ca14a893978278b78d499126ef1ab9dcc5899e20166fc82e26dc6ce8651b] <==
I0723 15:17:10.214461 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0723 15:17:10.214596 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0723 15:17:16.424169 1 handler_proxy.go:102] no RequestInfo found in the context
E0723 15:17:16.424414 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0723 15:17:16.424431 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0723 15:17:41.584539 1 client.go:360] parsed scheme: "passthrough"
I0723 15:17:41.584593 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0723 15:17:41.584602 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0723 15:18:16.650299 1 client.go:360] parsed scheme: "passthrough"
I0723 15:18:16.650344 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0723 15:18:16.650352 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0723 15:18:50.709786 1 client.go:360] parsed scheme: "passthrough"
I0723 15:18:50.709841 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0723 15:18:50.709851 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0723 15:19:14.979852 1 handler_proxy.go:102] no RequestInfo found in the context
E0723 15:19:14.979925 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0723 15:19:14.979933 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0723 15:19:22.177323 1 client.go:360] parsed scheme: "passthrough"
I0723 15:19:22.177367 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0723 15:19:22.177375 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0723 15:19:58.716094 1 client.go:360] parsed scheme: "passthrough"
I0723 15:19:58.716139 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0723 15:19:58.716148 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [04c63d20cc4fb561309df6c5509f4d19e50fed4cedd9d497d3766c8dc8a47876] <==
W0723 15:15:37.152859 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0723 15:16:03.220580 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0723 15:16:08.803380 1 request.go:655] Throttling request took 1.048400259s, request: GET:https://192.168.94.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
W0723 15:16:09.654798 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0723 15:16:33.722380 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0723 15:16:41.305294 1 request.go:655] Throttling request took 1.048426599s, request: GET:https://192.168.94.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0723 15:16:42.157129 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0723 15:17:04.224453 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0723 15:17:13.808181 1 request.go:655] Throttling request took 1.048230101s, request: GET:https://192.168.94.2:8443/apis/node.k8s.io/v1?timeout=32s
W0723 15:17:14.660617 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0723 15:17:34.726302 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0723 15:17:46.310969 1 request.go:655] Throttling request took 1.048248187s, request: GET:https://192.168.94.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W0723 15:17:47.162385 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0723 15:18:05.228058 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0723 15:18:18.813101 1 request.go:655] Throttling request took 1.04843578s, request: GET:https://192.168.94.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0723 15:18:19.664495 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0723 15:18:35.730147 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0723 15:18:51.315005 1 request.go:655] Throttling request took 1.048467514s, request: GET:https://192.168.94.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0723 15:18:52.166533 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0723 15:19:06.239548 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0723 15:19:23.817014 1 request.go:655] Throttling request took 1.048100888s, request: GET:https://192.168.94.2:8443/apis/extensions/v1beta1?timeout=32s
W0723 15:19:24.668565 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0723 15:19:36.741544 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0723 15:19:56.318942 1 request.go:655] Throttling request took 1.048451006s, request: GET:https://192.168.94.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0723 15:19:57.170800 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
==> kube-controller-manager [8d2b96d84f9beabad8dfeef84f6ffc6ad2decaaba936bea00e02d76bdbda14bd] <==
I0723 15:11:27.742470 1 shared_informer.go:247] Caches are synced for deployment
I0723 15:11:27.746144 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0723 15:11:27.746187 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0723 15:11:27.754629 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0723 15:11:27.798326 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I0723 15:11:27.829372 1 shared_informer.go:247] Caches are synced for endpoint
I0723 15:11:27.834070 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-wjs94"
I0723 15:11:27.848663 1 shared_informer.go:247] Caches are synced for resource quota
I0723 15:11:27.876727 1 shared_informer.go:247] Caches are synced for daemon sets
I0723 15:11:27.876839 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0723 15:11:27.887276 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-shrvw"
I0723 15:11:27.888195 1 shared_informer.go:247] Caches are synced for resource quota
I0723 15:11:27.896405 1 shared_informer.go:247] Caches are synced for stateful set
I0723 15:11:27.953890 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-qkzk5"
I0723 15:11:27.994939 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-7tf2r"
I0723 15:11:28.057602 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0723 15:11:28.314042 1 shared_informer.go:247] Caches are synced for garbage collector
I0723 15:11:28.314103 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0723 15:11:28.357735 1 shared_informer.go:247] Caches are synced for garbage collector
I0723 15:11:28.955626 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0723 15:11:28.987761 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-wjs94"
I0723 15:11:32.651684 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0723 15:13:35.938967 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
I0723 15:13:36.002103 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
E0723 15:13:36.030945 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
==> kube-proxy [77f1843d43162efaa0e63b93c5b96cc199e0b38572c500dcddd29726d4adfe5c] <==
I0723 15:14:16.668691 1 node.go:172] Successfully retrieved node IP: 192.168.94.2
I0723 15:14:16.668765 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.94.2), assume IPv4 operation
W0723 15:14:16.756805 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0723 15:14:16.757089 1 server_others.go:185] Using iptables Proxier.
I0723 15:14:16.757770 1 server.go:650] Version: v1.20.0
I0723 15:14:16.761115 1 config.go:315] Starting service config controller
I0723 15:14:16.761134 1 shared_informer.go:240] Waiting for caches to sync for service config
I0723 15:14:16.761156 1 config.go:224] Starting endpoint slice config controller
I0723 15:14:16.761160 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0723 15:14:16.861251 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0723 15:14:16.861319 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [c9c504e2b41d00fc39d4a686e3de24662ff23482132ab2d8c49dd72a8f6e1e30] <==
I0723 15:11:29.418785 1 node.go:172] Successfully retrieved node IP: 192.168.94.2
I0723 15:11:29.418879 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.94.2), assume IPv4 operation
W0723 15:11:29.441712 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0723 15:11:29.441805 1 server_others.go:185] Using iptables Proxier.
I0723 15:11:29.442339 1 server.go:650] Version: v1.20.0
I0723 15:11:29.443072 1 config.go:315] Starting service config controller
I0723 15:11:29.443089 1 shared_informer.go:240] Waiting for caches to sync for service config
I0723 15:11:29.443107 1 config.go:224] Starting endpoint slice config controller
I0723 15:11:29.443111 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0723 15:11:29.543232 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0723 15:11:29.543311 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [2273baa5c89014be6a24c8085e46387601aaab2734c312e6742364f10ed53999] <==
I0723 15:11:04.370416 1 serving.go:331] Generated self-signed cert in-memory
W0723 15:11:09.075826 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0723 15:11:09.075870 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0723 15:11:09.075883 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0723 15:11:09.075889 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0723 15:11:09.132585 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0723 15:11:09.132740 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0723 15:11:09.135592 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0723 15:11:09.135653 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0723 15:11:09.154194 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0723 15:11:09.154646 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0723 15:11:09.159935 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0723 15:11:09.160969 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0723 15:11:09.162598 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0723 15:11:09.166632 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0723 15:11:09.166742 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0723 15:11:09.201476 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0723 15:11:09.202276 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0723 15:11:09.202537 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0723 15:11:09.203453 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0723 15:11:09.203652 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0723 15:11:10.022062 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0723 15:11:10.128193 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0723 15:11:10.213699 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0723 15:11:10.733001 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
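The W/E burst at startup is the scheduler racing RBAC propagation after the container restart: every list/watch returns "forbidden" until the API server's authorizer catches up, and the closing "Caches are synced" line shows it recovered on its own, so no action was needed here. Had the requestheader warning persisted, the log's own suggested fix would apply; a sketch using the placeholders from that message (ROLEBINDING_NAME and YOUR_NS:YOUR_SA must be filled in):

    # Grants read access to the extension-apiserver-authentication ConfigMap.
    kubectl create rolebinding ROLEBINDING_NAME -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=YOUR_NS:YOUR_SA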
==> kube-scheduler [2abc13facf853178023a82b8f7d0d8043cae5cd5abe682e9a8d7afba643d968a] <==
I0723 15:14:07.320606 1 serving.go:331] Generated self-signed cert in-memory
W0723 15:14:13.861403 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0723 15:14:13.861444 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0723 15:14:13.861460 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0723 15:14:13.861469 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0723 15:14:14.045737 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0723 15:14:14.049765 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0723 15:14:14.049824 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0723 15:14:14.057287 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0723 15:14:14.176446 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jul 23 15:18:31 old-k8s-version-808561 kubelet[651]: E0723 15:18:31.484369 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 23 15:18:42 old-k8s-version-808561 kubelet[651]: I0723 15:18:42.484625 651 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724
Jul 23 15:18:42 old-k8s-version-808561 kubelet[651]: E0723 15:18:42.487949 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
Jul 23 15:18:42 old-k8s-version-808561 kubelet[651]: E0723 15:18:42.488698 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 23 15:18:53 old-k8s-version-808561 kubelet[651]: I0723 15:18:53.483878 651 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724
Jul 23 15:18:53 old-k8s-version-808561 kubelet[651]: E0723 15:18:53.484799 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
Jul 23 15:18:56 old-k8s-version-808561 kubelet[651]: E0723 15:18:56.487827 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 23 15:19:07 old-k8s-version-808561 kubelet[651]: E0723 15:19:07.488216 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 23 15:19:08 old-k8s-version-808561 kubelet[651]: I0723 15:19:08.483831 651 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724
Jul 23 15:19:08 old-k8s-version-808561 kubelet[651]: E0723 15:19:08.484416 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
Jul 23 15:19:21 old-k8s-version-808561 kubelet[651]: I0723 15:19:21.483727 651 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724
Jul 23 15:19:21 old-k8s-version-808561 kubelet[651]: E0723 15:19:21.484079 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
Jul 23 15:19:22 old-k8s-version-808561 kubelet[651]: E0723 15:19:22.484915 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 23 15:19:34 old-k8s-version-808561 kubelet[651]: I0723 15:19:34.484760 651 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724
Jul 23 15:19:34 old-k8s-version-808561 kubelet[651]: E0723 15:19:34.485043 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
Jul 23 15:19:37 old-k8s-version-808561 kubelet[651]: E0723 15:19:37.484452 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 23 15:19:45 old-k8s-version-808561 kubelet[651]: I0723 15:19:45.483763 651 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724
Jul 23 15:19:45 old-k8s-version-808561 kubelet[651]: E0723 15:19:45.484096 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
Jul 23 15:19:50 old-k8s-version-808561 kubelet[651]: E0723 15:19:50.484391 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 23 15:19:57 old-k8s-version-808561 kubelet[651]: I0723 15:19:57.483712 651 scope.go:95] [topologymanager] RemoveContainer - Container ID: 40f90f2129e06ae1cb2751551cd91da01175e4cbfefd80004fc968afdc087724
Jul 23 15:19:57 old-k8s-version-808561 kubelet[651]: E0723 15:19:57.484066 651 pod_workers.go:191] Error syncing pod 8c7db1ff-462b-4ade-ae6f-eaffc90be86e ("dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-sr6m9_kubernetes-dashboard(8c7db1ff-462b-4ade-ae6f-eaffc90be86e)"
Jul 23 15:20:03 old-k8s-version-808561 kubelet[651]: E0723 15:20:03.502593 651 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host
Jul 23 15:20:03 old-k8s-version-808561 kubelet[651]: E0723 15:20:03.502656 651 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host
Jul 23 15:20:03 old-k8s-version-808561 kubelet[651]: E0723 15:20:03.502792 651 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-555md,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host
Jul 23 15:20:03 old-k8s-version-808561 kubelet[651]: E0723 15:20:03.502840 651 pod_workers.go:191] Error syncing pod ab74cab2-36bf-4519-ab43-b8fe42df7e7e ("metrics-server-9975d5f86-mrdtz_kube-system(ab74cab2-36bf-4519-ab43-b8fe42df7e7e)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
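Two loops dominate this kubelet block: metrics-server never starts because fake.domain cannot be resolved (dial tcp: lookup fake.domain ... no such host), so every pull attempt fails and the back-off grows, while dashboard-metrics-scraper sits in CrashLoopBackOff with a 2m40s back-off. The usual triage, sketched with this run's pod names:

    # The Events section repeats the DNS failure recorded in the kubelet log.
    kubectl --context old-k8s-version-808561 -n kube-system \
      describe pod metrics-server-9975d5f86-mrdtz
    # Logs from the previous attempt show why the scraper container keeps exiting.
    kubectl --context old-k8s-version-808561 -n kubernetes-dashboard \
      logs dashboard-metrics-scraper-8d5bb5db8-sr6m9 --previous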
==> kubernetes-dashboard [c3749e826f005ff20043d583968c146150730ffbee9bdd7862b76867df122ff8] <==
2024/07/23 15:14:38 Starting overwatch
2024/07/23 15:14:38 Using namespace: kubernetes-dashboard
2024/07/23 15:14:38 Using in-cluster config to connect to apiserver
2024/07/23 15:14:38 Using secret token for csrf signing
2024/07/23 15:14:38 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/07/23 15:14:38 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/07/23 15:14:38 Successful initial request to the apiserver, version: v1.20.0
2024/07/23 15:14:38 Generating JWE encryption key
2024/07/23 15:14:38 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/07/23 15:14:38 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/07/23 15:14:38 Initializing JWE encryption key from synchronized object
2024/07/23 15:14:38 Creating in-cluster Sidecar client
2024/07/23 15:14:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/23 15:14:38 Serving insecurely on HTTP port: 9090
2024/07/23 15:15:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/23 15:15:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/23 15:16:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/23 15:16:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/23 15:17:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/23 15:17:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/23 15:18:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/23 15:18:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/23 15:19:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/23 15:19:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
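The dashboard retries its metric client health check every 30 seconds and fails each time with "the server is currently unable to handle the request", consistent with the aggregated metrics API never becoming available while metrics-server is stuck in ImagePullBackOff above. One way to confirm that (standard kubectl; the APIService name is the one the metrics-server addon registers):

    # Available=False in the conditions points at the unreachable metrics-server.
    kubectl --context old-k8s-version-808561 \
      get apiservice v1beta1.metrics.k8s.io -o yaml
    # Fails with the same "unable to handle the request" while the API is down.
    kubectl --context old-k8s-version-808561 top nodes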
==> storage-provisioner [9f2eb986e370cdadbd32cc535d065608dd8bfe2031e3e18c6ff9a18ca552f3d6] <==
I0723 15:14:16.403595 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0723 15:14:46.409964 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
==> storage-provisioner [a58712e13026ef6f6c2df7e40f9b14f87e43e72fd607766621dcab40c2067af2] <==
I0723 15:14:59.639258 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0723 15:14:59.653407 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0723 15:14:59.653476 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0723 15:15:17.114748 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0723 15:15:17.115002 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"87b5c846-41d9-4aaa-aba6-e64f8e82b0bf", APIVersion:"v1", ResourceVersion:"877", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-808561_9389f4a2-e2a0-4185-8003-110389ef1453 became leader
I0723 15:15:17.115395 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-808561_9389f4a2-e2a0-4185-8003-110389ef1453!
I0723 15:15:17.222172 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-808561_9389f4a2-e2a0-4185-8003-110389ef1453!
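The first storage-provisioner instance exited fatally after a 30-second i/o timeout dialing the in-cluster apiserver VIP (10.96.0.1:443), most likely because it started before networking in the restarted container was ready; the replacement initialized, won the kube-system/k8s.io-minikube-hostpath leader election, and started its controller, so provisioning recovered without intervention. The leader record lives on an Endpoints object (per the event above) and can be inspected:

    # The leader annotation names the instance that currently holds the lease.
    kubectl --context old-k8s-version-808561 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml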
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-808561 -n old-k8s-version-808561
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-808561 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-mrdtz
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-808561 describe pod metrics-server-9975d5f86-mrdtz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-808561 describe pod metrics-server-9975d5f86-mrdtz: exit status 1 (154.441128ms)
** stderr **
E0723 15:20:05.806851 3728159 memcache.go:287] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0723 15:20:05.833653 3728159 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0723 15:20:05.837898 3728159 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0723 15:20:05.840768 3728159 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0723 15:20:05.848748 3728159 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0723 15:20:05.851412 3728159 memcache.go:121] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Error from server (NotFound): pods "metrics-server-9975d5f86-mrdtz" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-808561 describe pod metrics-server-9975d5f86-mrdtz: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (376.71s)
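The post-mortem itself tripped over a race: the pod picked up by the non-running-pods list (metrics-server-9975d5f86-mrdtz) had already been deleted and replaced by the time the describe ran, hence the NotFound; the memcache errors are only kubectl's discovery failing against the still-unavailable metrics.k8s.io/v1beta1 API. Selecting by label rather than by generated name sidesteps the race; a sketch assuming the conventional k8s-app=metrics-server label on the addon's pods:

    # Describes whichever replica exists at that moment, whatever its name.
    kubectl --context old-k8s-version-808561 -n kube-system \
      describe pod -l k8s-app=metrics-server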