=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-452467 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-452467 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m15.837194973s)
-- stdout --
* [old-k8s-version-452467] minikube v1.34.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20062
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20062-1103064/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-1103064/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-452467" primary control-plane node in "old-k8s-version-452467" cluster
* Pulling base image v0.0.45-1730888964-19917 ...
* Restarting existing docker container for "old-k8s-version-452467" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-452467 addons enable metrics-server
* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
-- /stdout --
** stderr **
I1210 00:32:16.430405 1317926 out.go:345] Setting OutFile to fd 1 ...
I1210 00:32:16.430536 1317926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:32:16.430546 1317926 out.go:358] Setting ErrFile to fd 2...
I1210 00:32:16.430552 1317926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:32:16.431494 1317926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-1103064/.minikube/bin
I1210 00:32:16.432022 1317926 out.go:352] Setting JSON to false
I1210 00:32:16.433186 1317926 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29665,"bootTime":1733761072,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1210 00:32:16.433295 1317926 start.go:139] virtualization:
I1210 00:32:16.437322 1317926 out.go:177] * [old-k8s-version-452467] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1210 00:32:16.440573 1317926 notify.go:220] Checking for updates...
I1210 00:32:16.444329 1317926 out.go:177] - MINIKUBE_LOCATION=20062
I1210 00:32:16.447226 1317926 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1210 00:32:16.450164 1317926 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20062-1103064/kubeconfig
I1210 00:32:16.453045 1317926 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-1103064/.minikube
I1210 00:32:16.455928 1317926 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1210 00:32:16.458717 1317926 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1210 00:32:16.462120 1317926 config.go:182] Loaded profile config "old-k8s-version-452467": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1210 00:32:16.465663 1317926 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
I1210 00:32:16.468478 1317926 driver.go:394] Setting default libvirt URI to qemu:///system
I1210 00:32:16.503790 1317926 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
I1210 00:32:16.503977 1317926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 00:32:16.559990 1317926 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:true NGoroutines:67 SystemTime:2024-12-10 00:32:16.550458891 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1210 00:32:16.560112 1317926 docker.go:318] overlay module found
I1210 00:32:16.563170 1317926 out.go:177] * Using the docker driver based on existing profile
I1210 00:32:16.566018 1317926 start.go:297] selected driver: docker
I1210 00:32:16.566047 1317926 start.go:901] validating driver "docker" against &{Name:old-k8s-version-452467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-452467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1210 00:32:16.566163 1317926 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1210 00:32:16.566893 1317926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 00:32:16.626883 1317926 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:true NGoroutines:67 SystemTime:2024-12-10 00:32:16.616042831 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1210 00:32:16.627298 1317926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1210 00:32:16.627325 1317926 cni.go:84] Creating CNI manager for ""
I1210 00:32:16.627372 1317926 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1210 00:32:16.627415 1317926 start.go:340] cluster config:
{Name:old-k8s-version-452467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-452467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1210 00:32:16.630672 1317926 out.go:177] * Starting "old-k8s-version-452467" primary control-plane node in "old-k8s-version-452467" cluster
I1210 00:32:16.633513 1317926 cache.go:121] Beginning downloading kic base image for docker with containerd
I1210 00:32:16.636404 1317926 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
I1210 00:32:16.639317 1317926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1210 00:32:16.639393 1317926 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-1103064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I1210 00:32:16.639411 1317926 cache.go:56] Caching tarball of preloaded images
I1210 00:32:16.639417 1317926 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
I1210 00:32:16.639529 1317926 preload.go:172] Found /home/jenkins/minikube-integration/20062-1103064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1210 00:32:16.639541 1317926 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I1210 00:32:16.639686 1317926 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/old-k8s-version-452467/config.json ...
I1210 00:32:16.666593 1317926 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
I1210 00:32:16.666614 1317926 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
I1210 00:32:16.666627 1317926 cache.go:194] Successfully downloaded all kic artifacts
I1210 00:32:16.666711 1317926 start.go:360] acquireMachinesLock for old-k8s-version-452467: {Name:mk0fbb3fd7621188ae1793bc1f24cd73e894122b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 00:32:16.666805 1317926 start.go:364] duration metric: took 70.751µs to acquireMachinesLock for "old-k8s-version-452467"
I1210 00:32:16.666879 1317926 start.go:96] Skipping create...Using existing machine configuration
I1210 00:32:16.666888 1317926 fix.go:54] fixHost starting:
I1210 00:32:16.667243 1317926 cli_runner.go:164] Run: docker container inspect old-k8s-version-452467 --format={{.State.Status}}
I1210 00:32:16.696032 1317926 fix.go:112] recreateIfNeeded on old-k8s-version-452467: state=Stopped err=<nil>
W1210 00:32:16.696067 1317926 fix.go:138] unexpected machine state, will restart: <nil>
I1210 00:32:16.699444 1317926 out.go:177] * Restarting existing docker container for "old-k8s-version-452467" ...
I1210 00:32:16.702338 1317926 cli_runner.go:164] Run: docker start old-k8s-version-452467
I1210 00:32:17.017316 1317926 cli_runner.go:164] Run: docker container inspect old-k8s-version-452467 --format={{.State.Status}}
I1210 00:32:17.038440 1317926 kic.go:430] container "old-k8s-version-452467" state is running.
I1210 00:32:17.038857 1317926 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-452467
I1210 00:32:17.065650 1317926 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/old-k8s-version-452467/config.json ...
I1210 00:32:17.065879 1317926 machine.go:93] provisionDockerMachine start ...
I1210 00:32:17.065958 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:17.095370 1317926 main.go:141] libmachine: Using SSH client type: native
I1210 00:32:17.095942 1317926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 34523 <nil> <nil>}
I1210 00:32:17.095961 1317926 main.go:141] libmachine: About to run SSH command:
hostname
I1210 00:32:17.096665 1317926 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1210 00:32:20.228665 1317926 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-452467
I1210 00:32:20.228689 1317926 ubuntu.go:169] provisioning hostname "old-k8s-version-452467"
I1210 00:32:20.228752 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:20.246677 1317926 main.go:141] libmachine: Using SSH client type: native
I1210 00:32:20.246923 1317926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 34523 <nil> <nil>}
I1210 00:32:20.246945 1317926 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-452467 && echo "old-k8s-version-452467" | sudo tee /etc/hostname
I1210 00:32:20.387084 1317926 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-452467
I1210 00:32:20.387168 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:20.404798 1317926 main.go:141] libmachine: Using SSH client type: native
I1210 00:32:20.405059 1317926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 34523 <nil> <nil>}
I1210 00:32:20.405082 1317926 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-452467' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-452467/g' /etc/hosts;
  else
    echo '127.0.1.1 old-k8s-version-452467' | sudo tee -a /etc/hosts;
  fi
fi
I1210 00:32:20.529597 1317926 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1210 00:32:20.529627 1317926 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20062-1103064/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-1103064/.minikube}
I1210 00:32:20.529652 1317926 ubuntu.go:177] setting up certificates
I1210 00:32:20.529663 1317926 provision.go:84] configureAuth start
I1210 00:32:20.529734 1317926 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-452467
I1210 00:32:20.548055 1317926 provision.go:143] copyHostCerts
I1210 00:32:20.548144 1317926 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.pem, removing ...
I1210 00:32:20.548159 1317926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.pem
I1210 00:32:20.548235 1317926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.pem (1078 bytes)
I1210 00:32:20.548345 1317926 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-1103064/.minikube/cert.pem, removing ...
I1210 00:32:20.548356 1317926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-1103064/.minikube/cert.pem
I1210 00:32:20.548383 1317926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-1103064/.minikube/cert.pem (1123 bytes)
I1210 00:32:20.548453 1317926 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-1103064/.minikube/key.pem, removing ...
I1210 00:32:20.548461 1317926 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-1103064/.minikube/key.pem
I1210 00:32:20.548487 1317926 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-1103064/.minikube/key.pem (1679 bytes)
I1210 00:32:20.548551 1317926 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-452467 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-452467]
I1210 00:32:21.023117 1317926 provision.go:177] copyRemoteCerts
I1210 00:32:21.023194 1317926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1210 00:32:21.023242 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:21.041774 1317926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34523 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/old-k8s-version-452467/id_rsa Username:docker}
I1210 00:32:21.130202 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1210 00:32:21.155805 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1210 00:32:21.180007 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1210 00:32:21.205147 1317926 provision.go:87] duration metric: took 675.455087ms to configureAuth
I1210 00:32:21.205176 1317926 ubuntu.go:193] setting minikube options for container-runtime
I1210 00:32:21.205422 1317926 config.go:182] Loaded profile config "old-k8s-version-452467": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1210 00:32:21.205435 1317926 machine.go:96] duration metric: took 4.139539911s to provisionDockerMachine
I1210 00:32:21.205444 1317926 start.go:293] postStartSetup for "old-k8s-version-452467" (driver="docker")
I1210 00:32:21.205460 1317926 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1210 00:32:21.205519 1317926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1210 00:32:21.205561 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:21.222192 1317926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34523 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/old-k8s-version-452467/id_rsa Username:docker}
I1210 00:32:21.310361 1317926 ssh_runner.go:195] Run: cat /etc/os-release
I1210 00:32:21.313415 1317926 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1210 00:32:21.313459 1317926 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1210 00:32:21.313470 1317926 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1210 00:32:21.313480 1317926 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1210 00:32:21.313495 1317926 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-1103064/.minikube/addons for local assets ...
I1210 00:32:21.313553 1317926 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-1103064/.minikube/files for local assets ...
I1210 00:32:21.313639 1317926 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-1103064/.minikube/files/etc/ssl/certs/11084502.pem -> 11084502.pem in /etc/ssl/certs
I1210 00:32:21.313746 1317926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1210 00:32:21.322438 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/files/etc/ssl/certs/11084502.pem --> /etc/ssl/certs/11084502.pem (1708 bytes)
I1210 00:32:21.346529 1317926 start.go:296] duration metric: took 141.063275ms for postStartSetup
I1210 00:32:21.346625 1317926 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1210 00:32:21.346665 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:21.363483 1317926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34523 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/old-k8s-version-452467/id_rsa Username:docker}
I1210 00:32:21.455454 1317926 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1210 00:32:21.460010 1317926 fix.go:56] duration metric: took 4.793115589s for fixHost
I1210 00:32:21.460037 1317926 start.go:83] releasing machines lock for "old-k8s-version-452467", held for 4.793218904s
I1210 00:32:21.460123 1317926 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-452467
I1210 00:32:21.477417 1317926 ssh_runner.go:195] Run: cat /version.json
I1210 00:32:21.477479 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:21.477694 1317926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1210 00:32:21.477768 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:21.494348 1317926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34523 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/old-k8s-version-452467/id_rsa Username:docker}
I1210 00:32:21.517233 1317926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34523 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/old-k8s-version-452467/id_rsa Username:docker}
I1210 00:32:21.580746 1317926 ssh_runner.go:195] Run: systemctl --version
I1210 00:32:21.740920 1317926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1210 00:32:21.745369 1317926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1210 00:32:21.764446 1317926 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1210 00:32:21.764534 1317926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1210 00:32:21.775309 1317926 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1210 00:32:21.775371 1317926 start.go:495] detecting cgroup driver to use...
I1210 00:32:21.775408 1317926 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1210 00:32:21.775470 1317926 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1210 00:32:21.789470 1317926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1210 00:32:21.801367 1317926 docker.go:217] disabling cri-docker service (if available) ...
I1210 00:32:21.801458 1317926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1210 00:32:21.814915 1317926 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1210 00:32:21.826469 1317926 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1210 00:32:21.920952 1317926 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1210 00:32:22.024001 1317926 docker.go:233] disabling docker service ...
I1210 00:32:22.024143 1317926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1210 00:32:22.038977 1317926 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1210 00:32:22.052527 1317926 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1210 00:32:22.143131 1317926 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1210 00:32:22.235285 1317926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1210 00:32:22.246525 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1210 00:32:22.264890 1317926 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I1210 00:32:22.275956 1317926 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1210 00:32:22.287441 1317926 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1210 00:32:22.287556 1317926 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1210 00:32:22.298393 1317926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1210 00:32:22.308448 1317926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1210 00:32:22.318909 1317926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1210 00:32:22.329175 1317926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1210 00:32:22.338811 1317926 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1210 00:32:22.355729 1317926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1210 00:32:22.365510 1317926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1210 00:32:22.374116 1317926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1210 00:32:22.478306 1317926 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1210 00:32:22.696866 1317926 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1210 00:32:22.697022 1317926 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1210 00:32:22.701768 1317926 start.go:563] Will wait 60s for crictl version
I1210 00:32:22.701834 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:32:22.705149 1317926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1210 00:32:22.744955 1317926 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1210 00:32:22.745049 1317926 ssh_runner.go:195] Run: containerd --version
I1210 00:32:22.768849 1317926 ssh_runner.go:195] Run: containerd --version
I1210 00:32:22.799334 1317926 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
I1210 00:32:22.802322 1317926 cli_runner.go:164] Run: docker network inspect old-k8s-version-452467 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 00:32:22.818489 1317926 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1210 00:32:22.822269 1317926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1210 00:32:22.833645 1317926 kubeadm.go:883] updating cluster {Name:old-k8s-version-452467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-452467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1210 00:32:22.833760 1317926 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1210 00:32:22.833815 1317926 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 00:32:22.879044 1317926 containerd.go:627] all images are preloaded for containerd runtime.
I1210 00:32:22.879078 1317926 containerd.go:534] Images already preloaded, skipping extraction
I1210 00:32:22.879155 1317926 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 00:32:22.916631 1317926 containerd.go:627] all images are preloaded for containerd runtime.
I1210 00:32:22.916656 1317926 cache_images.go:84] Images are preloaded, skipping loading
I1210 00:32:22.916665 1317926 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
I1210 00:32:22.916779 1317926 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-452467 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-452467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1210 00:32:22.916849 1317926 ssh_runner.go:195] Run: sudo crictl info
I1210 00:32:22.956408 1317926 cni.go:84] Creating CNI manager for ""
I1210 00:32:22.956433 1317926 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1210 00:32:22.956445 1317926 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1210 00:32:22.956465 1317926 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-452467 NodeName:old-k8s-version-452467 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I1210 00:32:22.956629 1317926 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "old-k8s-version-452467"
  kubeletExtraArgs:
    node-ip: 192.168.85.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1210 00:32:22.956709 1317926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I1210 00:32:22.966418 1317926 binaries.go:44] Found k8s binaries, skipping transfer
I1210 00:32:22.966537 1317926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1210 00:32:22.975663 1317926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I1210 00:32:22.998307 1317926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1210 00:32:23.022610 1317926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I1210 00:32:23.043475 1317926 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1210 00:32:23.047525 1317926 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1210 00:32:23.058553 1317926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1210 00:32:23.154570 1317926 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1210 00:32:23.169167 1317926 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/old-k8s-version-452467 for IP: 192.168.85.2
I1210 00:32:23.169189 1317926 certs.go:194] generating shared ca certs ...
I1210 00:32:23.169206 1317926 certs.go:226] acquiring lock for ca certs: {Name:mkd7f0f0a5f922d78bc3f70822a394d56641c333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 00:32:23.169365 1317926 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.key
I1210 00:32:23.169410 1317926 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/proxy-client-ca.key
I1210 00:32:23.169419 1317926 certs.go:256] generating profile certs ...
I1210 00:32:23.169508 1317926 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/old-k8s-version-452467/client.key
I1210 00:32:23.169571 1317926 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/old-k8s-version-452467/apiserver.key.8c0e0260
I1210 00:32:23.169626 1317926 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/old-k8s-version-452467/proxy-client.key
I1210 00:32:23.169740 1317926 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/1108450.pem (1338 bytes)
W1210 00:32:23.169779 1317926 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/1108450_empty.pem, impossibly tiny 0 bytes
I1210 00:32:23.169792 1317926 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca-key.pem (1675 bytes)
I1210 00:32:23.169824 1317926 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca.pem (1078 bytes)
I1210 00:32:23.169850 1317926 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/cert.pem (1123 bytes)
I1210 00:32:23.169876 1317926 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/key.pem (1679 bytes)
I1210 00:32:23.169928 1317926 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/files/etc/ssl/certs/11084502.pem (1708 bytes)
I1210 00:32:23.170597 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1210 00:32:23.200745 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1210 00:32:23.232646 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1210 00:32:23.260695 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1210 00:32:23.291277 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/old-k8s-version-452467/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I1210 00:32:23.322297 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/old-k8s-version-452467/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1210 00:32:23.348872 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/old-k8s-version-452467/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1210 00:32:23.376380 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/old-k8s-version-452467/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1210 00:32:23.404217 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1210 00:32:23.429530 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/1108450.pem --> /usr/share/ca-certificates/1108450.pem (1338 bytes)
I1210 00:32:23.455285 1317926 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/files/etc/ssl/certs/11084502.pem --> /usr/share/ca-certificates/11084502.pem (1708 bytes)
I1210 00:32:23.479512 1317926 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1210 00:32:23.500695 1317926 ssh_runner.go:195] Run: openssl version
I1210 00:32:23.509702 1317926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1210 00:32:23.520264 1317926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1210 00:32:23.523943 1317926 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 9 23:44 /usr/share/ca-certificates/minikubeCA.pem
I1210 00:32:23.524015 1317926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1210 00:32:23.531870 1317926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1210 00:32:23.541143 1317926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1108450.pem && ln -fs /usr/share/ca-certificates/1108450.pem /etc/ssl/certs/1108450.pem"
I1210 00:32:23.550819 1317926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1108450.pem
I1210 00:32:23.554501 1317926 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 9 23:52 /usr/share/ca-certificates/1108450.pem
I1210 00:32:23.554569 1317926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1108450.pem
I1210 00:32:23.561543 1317926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1108450.pem /etc/ssl/certs/51391683.0"
I1210 00:32:23.570604 1317926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11084502.pem && ln -fs /usr/share/ca-certificates/11084502.pem /etc/ssl/certs/11084502.pem"
I1210 00:32:23.580424 1317926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11084502.pem
I1210 00:32:23.583968 1317926 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 9 23:52 /usr/share/ca-certificates/11084502.pem
I1210 00:32:23.584040 1317926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11084502.pem
I1210 00:32:23.591097 1317926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11084502.pem /etc/ssl/certs/3ec20f2e.0"
I1210 00:32:23.600268 1317926 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1210 00:32:23.603822 1317926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1210 00:32:23.610623 1317926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1210 00:32:23.617507 1317926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1210 00:32:23.624237 1317926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1210 00:32:23.631944 1317926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1210 00:32:23.638850 1317926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1210 00:32:23.645626 1317926 kubeadm.go:392] StartCluster: {Name:old-k8s-version-452467 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-452467 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1210 00:32:23.645726 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1210 00:32:23.645796 1317926 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1210 00:32:23.691507 1317926 cri.go:89] found id: "1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154"
I1210 00:32:23.691529 1317926 cri.go:89] found id: "bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e"
I1210 00:32:23.691533 1317926 cri.go:89] found id: "f327862c7bb7b4ae8fb50f1252a5515afa7c55db5abb225b11baf574419eb60a"
I1210 00:32:23.691537 1317926 cri.go:89] found id: "d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717"
I1210 00:32:23.691541 1317926 cri.go:89] found id: "47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869"
I1210 00:32:23.691544 1317926 cri.go:89] found id: "f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76"
I1210 00:32:23.691547 1317926 cri.go:89] found id: "47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85"
I1210 00:32:23.691550 1317926 cri.go:89] found id: "6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21"
I1210 00:32:23.691553 1317926 cri.go:89] found id: ""
I1210 00:32:23.691610 1317926 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1210 00:32:23.704266 1317926 cri.go:116] JSON = null
W1210 00:32:23.704346 1317926 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
I1210 00:32:23.704425 1317926 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1210 00:32:23.713618 1317926 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I1210 00:32:23.713639 1317926 kubeadm.go:593] restartPrimaryControlPlane start ...
I1210 00:32:23.713690 1317926 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1210 00:32:23.722901 1317926 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1210 00:32:23.723512 1317926 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-452467" does not appear in /home/jenkins/minikube-integration/20062-1103064/kubeconfig
I1210 00:32:23.723781 1317926 kubeconfig.go:62] /home/jenkins/minikube-integration/20062-1103064/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-452467" cluster setting kubeconfig missing "old-k8s-version-452467" context setting]
I1210 00:32:23.724287 1317926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-1103064/kubeconfig: {Name:mk16b3bebafd13eba97c6613264503f612c79652 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
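[editor's sketch] The "WriteFile acquiring" line shows minikube serializing kubeconfig updates behind a named lock with a 500ms retry delay and a 1m timeout. A minimal stand-in using an O_EXCL lock file; minikube's own lock package is the real mechanism, and this loop only illustrates the Delay/Timeout shape from the log:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire spins on an O_EXCL lock file, retrying every delay until timeout.
func acquire(lockPath string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(lockPath) }, nil
		}
		if !errors.Is(err, os.ErrExist) {
			return nil, err
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", lockPath)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Delay/Timeout values taken from the log line: Delay:500ms Timeout:1m0s.
	release, err := acquire("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	// ... write the repaired kubeconfig here while holding the lock ...
}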
I1210 00:32:23.726899 1317926 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1210 00:32:23.736981 1317926 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I1210 00:32:23.737014 1317926 kubeadm.go:597] duration metric: took 23.369054ms to restartPrimaryControlPlane
I1210 00:32:23.737023 1317926 kubeadm.go:394] duration metric: took 91.405625ms to StartCluster
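[editor's sketch] restartPrimaryControlPlane decides whether the control plane needs reconfiguring by diffing the deployed kubeadm.yaml against the freshly generated one; diff exiting 0, as here, means "does not require reconfiguration". A sketch of that decision, with paths from the log (the exit-code handling is the point):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// diff -u exits 0 when the files match, 1 when they differ, >1 on error.
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	err := cmd.Run()

	if err == nil {
		fmt.Println("The running cluster does not require reconfiguration")
		return
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		fmt.Println("kubeadm config drifted: reconfigure the control plane")
		return
	}
	panic(err)
}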
I1210 00:32:23.737044 1317926 settings.go:142] acquiring lock: {Name:mke3cc88924e31971acedc51064f8e969ffa46f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 00:32:23.737109 1317926 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20062-1103064/kubeconfig
I1210 00:32:23.738074 1317926 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-1103064/kubeconfig: {Name:mk16b3bebafd13eba97c6613264503f612c79652 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 00:32:23.738275 1317926 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1210 00:32:23.738574 1317926 config.go:182] Loaded profile config "old-k8s-version-452467": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1210 00:32:23.738626 1317926 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1210 00:32:23.738746 1317926 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-452467"
I1210 00:32:23.738771 1317926 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-452467"
W1210 00:32:23.738778 1317926 addons.go:243] addon storage-provisioner should already be in state true
I1210 00:32:23.738801 1317926 host.go:66] Checking if "old-k8s-version-452467" exists ...
I1210 00:32:23.738826 1317926 addons.go:69] Setting dashboard=true in profile "old-k8s-version-452467"
I1210 00:32:23.738875 1317926 addons.go:234] Setting addon dashboard=true in "old-k8s-version-452467"
W1210 00:32:23.738900 1317926 addons.go:243] addon dashboard should already be in state true
I1210 00:32:23.738938 1317926 host.go:66] Checking if "old-k8s-version-452467" exists ...
I1210 00:32:23.739271 1317926 cli_runner.go:164] Run: docker container inspect old-k8s-version-452467 --format={{.State.Status}}
I1210 00:32:23.739498 1317926 cli_runner.go:164] Run: docker container inspect old-k8s-version-452467 --format={{.State.Status}}
I1210 00:32:23.739763 1317926 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-452467"
I1210 00:32:23.739787 1317926 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-452467"
W1210 00:32:23.739795 1317926 addons.go:243] addon metrics-server should already be in state true
I1210 00:32:23.739821 1317926 host.go:66] Checking if "old-k8s-version-452467" exists ...
I1210 00:32:23.740204 1317926 cli_runner.go:164] Run: docker container inspect old-k8s-version-452467 --format={{.State.Status}}
I1210 00:32:23.742634 1317926 out.go:177] * Verifying Kubernetes components...
I1210 00:32:23.738799 1317926 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-452467"
I1210 00:32:23.742938 1317926 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-452467"
I1210 00:32:23.743250 1317926 cli_runner.go:164] Run: docker container inspect old-k8s-version-452467 --format={{.State.Status}}
I1210 00:32:23.749398 1317926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1210 00:32:23.782523 1317926 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1210 00:32:23.789229 1317926 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1210 00:32:23.792266 1317926 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1210 00:32:23.792297 1317926 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1210 00:32:23.792364 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:23.797935 1317926 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1210 00:32:23.801587 1317926 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1210 00:32:23.801618 1317926 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1210 00:32:23.801682 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:23.807756 1317926 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1210 00:32:23.810768 1317926 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1210 00:32:23.810791 1317926 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1210 00:32:23.810864 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:23.814527 1317926 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-452467"
W1210 00:32:23.814549 1317926 addons.go:243] addon default-storageclass should already be in state true
I1210 00:32:23.814576 1317926 host.go:66] Checking if "old-k8s-version-452467" exists ...
I1210 00:32:23.814969 1317926 cli_runner.go:164] Run: docker container inspect old-k8s-version-452467 --format={{.State.Status}}
I1210 00:32:23.853490 1317926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34523 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/old-k8s-version-452467/id_rsa Username:docker}
I1210 00:32:23.861698 1317926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34523 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/old-k8s-version-452467/id_rsa Username:docker}
I1210 00:32:23.891172 1317926 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1210 00:32:23.891194 1317926 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1210 00:32:23.891254 1317926 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-452467
I1210 00:32:23.892862 1317926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34523 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/old-k8s-version-452467/id_rsa Username:docker}
I1210 00:32:23.926409 1317926 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34523 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/old-k8s-version-452467/id_rsa Username:docker}
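[editor's sketch] The cli_runner lines above use docker container inspect with Go templates in two ways: --format={{.State.Status}} to gate on container state, and the nested index expression to recover the host port published for the container's 22/tcp, which sshutil then dials at 127.0.0.1:<port> (34523 in this run). A sketch of the port lookup, with the profile name from this run and error handling abbreviated:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "old-k8s-version-452467"

	// Same template the log uses to find the SSH port published for 22/tcp.
	const portTmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", portTmpl, name).Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))

	fmt.Printf("ssh to 127.0.0.1:%s as user docker with the profile's id_rsa\n", port)
}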
I1210 00:32:23.928518 1317926 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1210 00:32:23.966262 1317926 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-452467" to be "Ready" ...
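[editor's sketch] node_ready.go's wait is a poll of the node object until its Ready condition reports True; the "connection refused" errors that follow are expected while the restarted apiserver is still coming up and are simply retried. A minimal client-go rendering of the same check; the kubeconfig path and node name come from this run, while the 2s polling cadence is an assumption:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"old-k8s-version-452467", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println(`node has status "Ready":"True"`)
					return
				}
			}
		}
		// Errors (e.g. connection refused during apiserver restart) are retried.
		time.Sleep(2 * time.Second)
	}
	panic("node never became Ready")
}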
I1210 00:32:24.004078 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1210 00:32:24.040150 1317926 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1210 00:32:24.040174 1317926 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1210 00:32:24.057715 1317926 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1210 00:32:24.057744 1317926 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1210 00:32:24.063060 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1210 00:32:24.098564 1317926 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1210 00:32:24.098642 1317926 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1210 00:32:24.120231 1317926 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1210 00:32:24.120347 1317926 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1210 00:32:24.153406 1317926 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1210 00:32:24.153482 1317926 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1210 00:32:24.173437 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1210 00:32:24.180353 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.180481 1317926 retry.go:31] will retry after 227.76452ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
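[editor's sketch] Each apply failure above is handed to retry.go, which reruns the command after a randomized, growing delay (227ms, 363ms, ... climbing to several seconds later in the log) until the apiserver starts accepting connections. A stripped-down loop with the same "will retry after" shape; the doubling-plus-jitter factors are an assumption, and minikube's retry package is the real mechanism:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryAfter reruns fn until it succeeds, sleeping a jittered, growing
// delay between attempts, as in the "will retry after ..." log lines.
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Double the delay each attempt and add jitter (assumed factors).
		d := base * time.Duration(1<<i)
		d = d/2 + time.Duration(rand.Int63n(int64(d)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	_ = retryAfter(5, 200*time.Millisecond, func() error {
		calls++
		if calls < 4 {
			return fmt.Errorf("connection to the server localhost:8443 was refused")
		}
		return nil
	})
}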
I1210 00:32:24.193214 1317926 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1210 00:32:24.193293 1317926 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
W1210 00:32:24.226480 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.226562 1317926 retry.go:31] will retry after 363.934384ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.230586 1317926 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1210 00:32:24.230652 1317926 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1210 00:32:24.250576 1317926 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I1210 00:32:24.250652 1317926 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1210 00:32:24.270352 1317926 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1210 00:32:24.270421 1317926 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1210 00:32:24.292087 1317926 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1210 00:32:24.292162 1317926 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
W1210 00:32:24.298712 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.298761 1317926 retry.go:31] will retry after 363.933408ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.312624 1317926 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1210 00:32:24.312655 1317926 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1210 00:32:24.331441 1317926 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1210 00:32:24.331520 1317926 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1210 00:32:24.349872 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1210 00:32:24.409149 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1210 00:32:24.427601 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.427641 1317926 retry.go:31] will retry after 183.953653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1210 00:32:24.496795 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.496833 1317926 retry.go:31] will retry after 509.975888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.591164 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1210 00:32:24.612578 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1210 00:32:24.663188 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1210 00:32:24.708033 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.708116 1317926 retry.go:31] will retry after 539.912184ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1210 00:32:24.735481 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.735575 1317926 retry.go:31] will retry after 187.812604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1210 00:32:24.775525 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.775565 1317926 retry.go:31] will retry after 240.961928ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:24.923991 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1210 00:32:25.007134 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1210 00:32:25.007241 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:25.007282 1317926 retry.go:31] will retry after 327.837906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:25.017058 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1210 00:32:25.131327 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:25.131371 1317926 retry.go:31] will retry after 458.997905ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1210 00:32:25.131412 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:25.131430 1317926 retry.go:31] will retry after 830.359735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:25.248586 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1210 00:32:25.317233 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:25.317269 1317926 retry.go:31] will retry after 771.067571ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:25.335415 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1210 00:32:25.407135 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:25.407165 1317926 retry.go:31] will retry after 956.848555ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:25.591141 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1210 00:32:25.655945 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:25.655986 1317926 retry.go:31] will retry after 1.004953882s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:25.962492 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1210 00:32:25.967093 1317926 node_ready.go:53] error getting node "old-k8s-version-452467": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-452467": dial tcp 192.168.85.2:8443: connect: connection refused
W1210 00:32:26.052262 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:26.052298 1317926 retry.go:31] will retry after 576.415926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:26.089509 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1210 00:32:26.155473 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:26.155505 1317926 retry.go:31] will retry after 903.400411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:26.364866 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1210 00:32:26.436872 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:26.436916 1317926 retry.go:31] will retry after 1.727919692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:26.629911 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1210 00:32:26.661252 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1210 00:32:26.704460 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:26.704544 1317926 retry.go:31] will retry after 1.487173389s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1210 00:32:26.754252 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:26.754334 1317926 retry.go:31] will retry after 1.812277779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:27.059095 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1210 00:32:27.137304 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:27.137368 1317926 retry.go:31] will retry after 1.142080357s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:28.165053 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1210 00:32:28.192561 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1210 00:32:28.279749 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1210 00:32:28.343162 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:28.343240 1317926 retry.go:31] will retry after 2.22576061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1210 00:32:28.407028 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:28.407106 1317926 retry.go:31] will retry after 2.109615194s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1210 00:32:28.439585 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:28.439620 1317926 retry.go:31] will retry after 1.726981421s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:28.467137 1317926 node_ready.go:53] error getting node "old-k8s-version-452467": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-452467": dial tcp 192.168.85.2:8443: connect: connection refused
I1210 00:32:28.567540 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1210 00:32:28.639070 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:28.639104 1317926 retry.go:31] will retry after 1.468212415s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:30.107986 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1210 00:32:30.166965 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1210 00:32:30.188852 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:30.188920 1317926 retry.go:31] will retry after 3.698726058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1210 00:32:30.248537 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:30.248585 1317926 retry.go:31] will retry after 1.589612586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:30.467204 1317926 node_ready.go:53] error getting node "old-k8s-version-452467": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-452467": dial tcp 192.168.85.2:8443: connect: connection refused
I1210 00:32:30.517426 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1210 00:32:30.569760 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1210 00:32:30.594827 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:30.594857 1317926 retry.go:31] will retry after 3.133212861s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1210 00:32:30.657275 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:30.657306 1317926 retry.go:31] will retry after 2.716190041s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:31.839346 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1210 00:32:31.926783 1317926 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:31.926816 1317926 retry.go:31] will retry after 4.599812678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1210 00:32:33.374530 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1210 00:32:33.729166 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1210 00:32:33.888582 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1210 00:32:36.527076 1317926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1210 00:32:40.174066 1317926 node_ready.go:49] node "old-k8s-version-452467" has status "Ready":"True"
I1210 00:32:40.174176 1317926 node_ready.go:38] duration metric: took 16.207831654s for node "old-k8s-version-452467" to be "Ready" ...
I1210 00:32:40.174192 1317926 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
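[editor's sketch] Once the node is Ready, pod_ready.go performs the "extra waiting" pass: for each system-critical selector listed above (k8s-app=kube-dns, component=etcd, ...) it polls the matching kube-system pod until its Ready condition is True. A condensed client-go sketch of that per-selector check; the selectors come from the log line, while the polling interval and the omitted timeout handling are simplifications:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Label selectors listed in the log's "extra waiting" line.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
				fmt.Printf("pods matching %q are Ready\n", sel)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}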
I1210 00:32:40.335411 1317926 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-zv627" in "kube-system" namespace to be "Ready" ...
I1210 00:32:40.678666 1317926 pod_ready.go:93] pod "coredns-74ff55c5b-zv627" in "kube-system" namespace has status "Ready":"True"
I1210 00:32:40.678693 1317926 pod_ready.go:82] duration metric: took 343.192016ms for pod "coredns-74ff55c5b-zv627" in "kube-system" namespace to be "Ready" ...
I1210 00:32:40.678710 1317926 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-452467" in "kube-system" namespace to be "Ready" ...
I1210 00:32:40.871564 1317926 pod_ready.go:93] pod "etcd-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"True"
I1210 00:32:40.871593 1317926 pod_ready.go:82] duration metric: took 192.875742ms for pod "etcd-old-k8s-version-452467" in "kube-system" namespace to be "Ready" ...
I1210 00:32:40.871609 1317926 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-452467" in "kube-system" namespace to be "Ready" ...
I1210 00:32:42.852176 1317926 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.477593161s)
I1210 00:32:42.852445 1317926 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.123239181s)
I1210 00:32:42.852466 1317926 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-452467"
I1210 00:32:42.852524 1317926 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.963920411s)
I1210 00:32:42.852554 1317926 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.325456014s)
I1210 00:32:42.855321 1317926 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-452467 addons enable metrics-server
I1210 00:32:42.877989 1317926 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
I1210 00:32:42.881212 1317926 addons.go:510] duration metric: took 19.142578326s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
I1210 00:32:42.903591 1317926 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:32:45.378469 1317926 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:32:47.879694 1317926 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:32:50.379552 1317926 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:32:52.879536 1317926 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"True"
I1210 00:32:52.879564 1317926 pod_ready.go:82] duration metric: took 12.007946909s for pod "kube-apiserver-old-k8s-version-452467" in "kube-system" namespace to be "Ready" ...
I1210 00:32:52.879577 1317926 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace to be "Ready" ...
I1210 00:32:54.887782 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:32:56.893193 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:32:59.395356 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:01.888511 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:03.895111 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:06.410784 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:08.888153 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:10.897958 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:13.386220 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:15.386862 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:17.387485 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:19.389186 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:21.887114 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:24.386520 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:26.386696 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:28.896466 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:31.387041 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:33.887361 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:36.385799 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:38.386792 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:40.885792 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:43.385986 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:45.387568 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:47.388576 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:49.885710 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:51.885824 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:54.386248 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:56.387025 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:33:58.885701 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:00.886185 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:02.886427 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:05.386135 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:07.386736 1317926 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:07.886639 1317926 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"True"
I1210 00:34:07.886676 1317926 pod_ready.go:82] duration metric: took 1m15.007090313s for pod "kube-controller-manager-old-k8s-version-452467" in "kube-system" namespace to be "Ready" ...
I1210 00:34:07.886690 1317926 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-brbcq" in "kube-system" namespace to be "Ready" ...
I1210 00:34:07.892494 1317926 pod_ready.go:93] pod "kube-proxy-brbcq" in "kube-system" namespace has status "Ready":"True"
I1210 00:34:07.892521 1317926 pod_ready.go:82] duration metric: took 5.823335ms for pod "kube-proxy-brbcq" in "kube-system" namespace to be "Ready" ...
I1210 00:34:07.892534 1317926 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-452467" in "kube-system" namespace to be "Ready" ...
I1210 00:34:07.897752 1317926 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-452467" in "kube-system" namespace has status "Ready":"True"
I1210 00:34:07.897776 1317926 pod_ready.go:82] duration metric: took 5.233155ms for pod "kube-scheduler-old-k8s-version-452467" in "kube-system" namespace to be "Ready" ...
I1210 00:34:07.897789 1317926 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace to be "Ready" ...
I1210 00:34:09.910093 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:12.404393 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:14.904485 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:17.404880 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:19.903780 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:21.903936 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:23.904131 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:25.904450 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:28.404826 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:30.903711 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:32.903988 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:34.905270 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:37.404403 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:39.404735 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:41.904031 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:43.904246 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:45.904320 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:48.404401 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:50.405051 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:52.904359 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:54.904706 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:57.404476 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:34:59.405000 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:01.405715 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:03.406508 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:05.903860 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:07.904052 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:10.403865 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:12.405042 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:14.904483 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:17.403873 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:19.404293 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:21.426424 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:23.904719 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:26.404693 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:28.904737 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:31.403366 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:33.403588 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:35.903956 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:38.404602 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:40.904247 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:43.403847 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:45.404611 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:47.904407 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:49.904557 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:51.907594 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:54.405610 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:56.904677 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:35:59.404134 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:01.405185 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:03.904685 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:06.403940 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:08.404388 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:10.904111 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:12.904197 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:15.403527 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:17.404543 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:19.903408 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:21.950344 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:24.404273 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:26.404546 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:28.404723 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:30.906356 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:33.404727 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:35.404875 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:37.405059 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:39.904321 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:41.904869 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:44.403814 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:46.903991 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:49.403804 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:51.403840 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:53.904248 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:56.406373 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:36:58.903796 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:00.904876 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:03.404350 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:05.903417 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:07.904591 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:09.905280 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:12.461712 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:14.904228 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:16.904717 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:19.404051 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:21.903974 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:23.904014 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:25.904945 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:28.404177 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:30.410961 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:32.905751 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:35.404809 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:37.405534 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:39.904447 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:41.909286 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:44.404430 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:46.904694 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:49.404210 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:51.904907 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:54.404063 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:56.404163 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:37:58.404397 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:38:00.916273 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:38:02.951478 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:38:05.406841 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:38:07.905845 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:38:07.905872 1317926 pod_ready.go:82] duration metric: took 4m0.008075913s for pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace to be "Ready" ...
E1210 00:38:07.905884 1317926 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1210 00:38:07.905892 1317926 pod_ready.go:39] duration metric: took 5m27.731687929s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
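metrics-server never reaches Ready because its container image points at fake.domain, which is deliberately unresolvable (see the ErrImagePull/ImagePullBackOff kubelet entries gathered later in this log), so the poll runs until its deadline and surfaces "context deadline exceeded". A deadline-bounded poll of this shape, using only the Go standard library (interval and timeout values here are illustrative, not minikube's):

    // Sketch: poll a readiness check until it succeeds or the context
    // deadline expires, mirroring the 4m timeout reported above.
    package readiness

    import (
        "context"
        "errors"
        "time"
    )

    func waitReady(ctx context.Context, check func() bool) error {
        ctx, cancel := context.WithTimeout(ctx, 4*time.Minute)
        defer cancel()
        tick := time.NewTicker(3 * time.Second)
        defer tick.Stop()
        for {
            select {
            case <-ctx.Done():
                return errors.New("waitPodCondition: " + ctx.Err().Error())
            case <-tick.C:
                if check() {
                    return nil
                }
            }
        }
    }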
I1210 00:38:07.905907 1317926 api_server.go:52] waiting for apiserver process to appear ...
I1210 00:38:07.905936 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1210 00:38:07.905996 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1210 00:38:07.959282 1317926 cri.go:89] found id: "d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d"
I1210 00:38:07.959302 1317926 cri.go:89] found id: "47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85"
I1210 00:38:07.959307 1317926 cri.go:89] found id: ""
I1210 00:38:07.959314 1317926 logs.go:282] 2 containers: [d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d 47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85]
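The same enumeration repeats below for etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard, and storage-provisioner: crictl lists all container IDs (running and exited) whose name matches, and the two IDs per component reflect the pre- and post-restart incarnations. The equivalent query, wrapped in Go for illustration (the crictl command is copied verbatim from the log; running it assumes crictl is installed and sudo works non-interactively):

    // Sketch: run the crictl query shown above and split the
    // newline-separated container IDs it prints.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        ids := strings.Fields(string(out))
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }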
I1210 00:38:07.959385 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:07.963458 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:07.967452 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1210 00:38:07.967524 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1210 00:38:08.015124 1317926 cri.go:89] found id: "4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4"
I1210 00:38:08.015146 1317926 cri.go:89] found id: "f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76"
I1210 00:38:08.015151 1317926 cri.go:89] found id: ""
I1210 00:38:08.015158 1317926 logs.go:282] 2 containers: [4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4 f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76]
I1210 00:38:08.015220 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.019565 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.023561 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1210 00:38:08.023677 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1210 00:38:08.073248 1317926 cri.go:89] found id: "8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051"
I1210 00:38:08.073324 1317926 cri.go:89] found id: "1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154"
I1210 00:38:08.073373 1317926 cri.go:89] found id: ""
I1210 00:38:08.073398 1317926 logs.go:282] 2 containers: [8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051 1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154]
I1210 00:38:08.073478 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.077764 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.081739 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1210 00:38:08.081857 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1210 00:38:08.144612 1317926 cri.go:89] found id: "c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077"
I1210 00:38:08.144694 1317926 cri.go:89] found id: "47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869"
I1210 00:38:08.144713 1317926 cri.go:89] found id: ""
I1210 00:38:08.144737 1317926 logs.go:282] 2 containers: [c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077 47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869]
I1210 00:38:08.144827 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.149481 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.153535 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1210 00:38:08.153660 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1210 00:38:08.203599 1317926 cri.go:89] found id: "a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e"
I1210 00:38:08.203674 1317926 cri.go:89] found id: "d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717"
I1210 00:38:08.203692 1317926 cri.go:89] found id: ""
I1210 00:38:08.203717 1317926 logs.go:282] 2 containers: [a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717]
I1210 00:38:08.203799 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.208045 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.211792 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1210 00:38:08.211911 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1210 00:38:08.255984 1317926 cri.go:89] found id: "dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0"
I1210 00:38:08.256058 1317926 cri.go:89] found id: "6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21"
I1210 00:38:08.256077 1317926 cri.go:89] found id: ""
I1210 00:38:08.256101 1317926 logs.go:282] 2 containers: [dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0 6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21]
I1210 00:38:08.256184 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.260301 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.263995 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1210 00:38:08.264125 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1210 00:38:08.315154 1317926 cri.go:89] found id: "eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469"
I1210 00:38:08.315229 1317926 cri.go:89] found id: "bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e"
I1210 00:38:08.315248 1317926 cri.go:89] found id: ""
I1210 00:38:08.315272 1317926 logs.go:282] 2 containers: [eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469 bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e]
I1210 00:38:08.315355 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.319457 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.323233 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1210 00:38:08.323357 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1210 00:38:08.371319 1317926 cri.go:89] found id: "07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6"
I1210 00:38:08.371394 1317926 cri.go:89] found id: ""
I1210 00:38:08.371417 1317926 logs.go:282] 1 container: [07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6]
I1210 00:38:08.371509 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.375493 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1210 00:38:08.375616 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1210 00:38:08.429911 1317926 cri.go:89] found id: "c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488"
I1210 00:38:08.429985 1317926 cri.go:89] found id: "7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41"
I1210 00:38:08.430004 1317926 cri.go:89] found id: ""
I1210 00:38:08.430029 1317926 logs.go:282] 2 containers: [c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488 7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41]
I1210 00:38:08.430126 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.435122 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.439326 1317926 logs.go:123] Gathering logs for kindnet [eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469] ...
I1210 00:38:08.439399 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469"
I1210 00:38:08.492661 1317926 logs.go:123] Gathering logs for kindnet [bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e] ...
I1210 00:38:08.492815 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e"
I1210 00:38:08.542139 1317926 logs.go:123] Gathering logs for kube-apiserver [d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d] ...
I1210 00:38:08.542223 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d"
I1210 00:38:08.617414 1317926 logs.go:123] Gathering logs for etcd [4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4] ...
I1210 00:38:08.617487 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4"
I1210 00:38:08.667735 1317926 logs.go:123] Gathering logs for coredns [8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051] ...
I1210 00:38:08.667809 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051"
I1210 00:38:08.724724 1317926 logs.go:123] Gathering logs for kube-scheduler [47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869] ...
I1210 00:38:08.724798 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869"
I1210 00:38:08.780190 1317926 logs.go:123] Gathering logs for kube-proxy [d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717] ...
I1210 00:38:08.780378 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717"
I1210 00:38:08.835091 1317926 logs.go:123] Gathering logs for kube-controller-manager [6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21] ...
I1210 00:38:08.835116 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21"
I1210 00:38:08.931488 1317926 logs.go:123] Gathering logs for dmesg ...
I1210 00:38:08.931580 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1210 00:38:08.951636 1317926 logs.go:123] Gathering logs for coredns [1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154] ...
I1210 00:38:08.951659 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154"
I1210 00:38:09.026506 1317926 logs.go:123] Gathering logs for kube-scheduler [c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077] ...
I1210 00:38:09.026532 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077"
I1210 00:38:09.087870 1317926 logs.go:123] Gathering logs for kubernetes-dashboard [07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6] ...
I1210 00:38:09.087953 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6"
I1210 00:38:09.162884 1317926 logs.go:123] Gathering logs for storage-provisioner [7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41] ...
I1210 00:38:09.162917 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41"
I1210 00:38:09.224509 1317926 logs.go:123] Gathering logs for describe nodes ...
I1210 00:38:09.224616 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1210 00:38:09.398154 1317926 logs.go:123] Gathering logs for kube-apiserver [47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85] ...
I1210 00:38:09.398187 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85"
I1210 00:38:09.463255 1317926 logs.go:123] Gathering logs for containerd ...
I1210 00:38:09.463295 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1210 00:38:09.532839 1317926 logs.go:123] Gathering logs for container status ...
I1210 00:38:09.532875 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1210 00:38:09.597269 1317926 logs.go:123] Gathering logs for kubelet ...
I1210 00:38:09.597457 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
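With the readiness wait abandoned, minikube snapshots diagnostics source by source: crictl logs --tail 400 per container, journalctl for containerd and the kubelet, dmesg, kubectl describe nodes, and a container-status listing whose shell fragment (sudo `which crictl || echo crictl` ps -a || sudo docker ps -a) falls back to docker if crictl is missing. The gathering loop implied by these logs.go lines, sketched with commands copied from the log (the loop itself is illustrative):

    // Sketch: run each diagnostic command and collect its output,
    // as the "Gathering logs for ..." lines above do one by one.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        sources := map[string]string{
            "kubelet":    "sudo journalctl -u kubelet -n 400",
            "containerd": "sudo journalctl -u containerd -n 400",
            "dmesg":      "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
        }
        for name, cmd := range sources {
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
        }
    }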
W1210 00:38:09.658243 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.134990 660 reflector.go:138] object-"kube-system"/"kindnet-token-jppl8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jppl8" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.658519 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135109 660 reflector.go:138] object-"default"/"default-token-cjfvv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-cjfvv" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.658759 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135173 660 reflector.go:138] object-"kube-system"/"kube-proxy-token-kh82p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kh82p" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.659016 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135229 660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-t88lj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-t88lj" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.659266 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135383 660 reflector.go:138] object-"kube-system"/"metrics-server-token-6t6sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-6t6sh" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.659512 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135444 660 reflector.go:138] object-"kube-system"/"coredns-token-ltmzn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ltmzn" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.659735 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135505 660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.659961 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135561 660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.669433 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:44 old-k8s-version-452467 kubelet[660]: E1210 00:32:44.318795 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:09.669672 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:44 old-k8s-version-452467 kubelet[660]: E1210 00:32:44.966265 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.673687 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:59 old-k8s-version-452467 kubelet[660]: E1210 00:32:59.771822 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:09.674593 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:13 old-k8s-version-452467 kubelet[660]: E1210 00:33:13.748779 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.675061 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:14 old-k8s-version-452467 kubelet[660]: E1210 00:33:14.246302 660 pod_workers.go:191] Error syncing pod 7cbccf04-1bab-4d0f-b6b7-06642841c9ad ("storage-provisioner_kube-system(7cbccf04-1bab-4d0f-b6b7-06642841c9ad)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7cbccf04-1bab-4d0f-b6b7-06642841c9ad)"
W1210 00:38:09.675681 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:15 old-k8s-version-452467 kubelet[660]: E1210 00:33:15.274011 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.676046 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:16 old-k8s-version-452467 kubelet[660]: E1210 00:33:16.278186 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.676743 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:22 old-k8s-version-452467 kubelet[660]: E1210 00:33:22.652549 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.679222 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:25 old-k8s-version-452467 kubelet[660]: E1210 00:33:25.747166 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:09.679966 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:36 old-k8s-version-452467 kubelet[660]: E1210 00:33:36.343317 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.680174 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:37 old-k8s-version-452467 kubelet[660]: E1210 00:33:37.723434 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.680528 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:42 old-k8s-version-452467 kubelet[660]: E1210 00:33:42.652410 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.680741 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:49 old-k8s-version-452467 kubelet[660]: E1210 00:33:49.723131 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.681124 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:53 old-k8s-version-452467 kubelet[660]: E1210 00:33:53.722786 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.681333 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:02 old-k8s-version-452467 kubelet[660]: E1210 00:34:02.722788 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.682058 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:08 old-k8s-version-452467 kubelet[660]: E1210 00:34:08.441533 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.682414 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:12 old-k8s-version-452467 kubelet[660]: E1210 00:34:12.653097 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.684941 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:13 old-k8s-version-452467 kubelet[660]: E1210 00:34:13.735304 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:09.685301 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:25 old-k8s-version-452467 kubelet[660]: E1210 00:34:25.726049 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.685532 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:26 old-k8s-version-452467 kubelet[660]: E1210 00:34:26.722773 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.685742 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:37 old-k8s-version-452467 kubelet[660]: E1210 00:34:37.726330 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.686125 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:37 old-k8s-version-452467 kubelet[660]: E1210 00:34:37.732566 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.686332 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:48 old-k8s-version-452467 kubelet[660]: E1210 00:34:48.723555 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.686951 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:52 old-k8s-version-452467 kubelet[660]: E1210 00:34:52.558062 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.687302 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:53 old-k8s-version-452467 kubelet[660]: E1210 00:34:53.558808 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.687514 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:03 old-k8s-version-452467 kubelet[660]: E1210 00:35:03.722557 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.687868 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:06 old-k8s-version-452467 kubelet[660]: E1210 00:35:06.722239 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.688082 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:18 old-k8s-version-452467 kubelet[660]: E1210 00:35:18.722467 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.688434 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:19 old-k8s-version-452467 kubelet[660]: E1210 00:35:19.722179 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.688647 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:31 old-k8s-version-452467 kubelet[660]: E1210 00:35:31.727727 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.689027 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:32 old-k8s-version-452467 kubelet[660]: E1210 00:35:32.722255 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.691682 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:42 old-k8s-version-452467 kubelet[660]: E1210 00:35:42.732073 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:09.692088 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:43 old-k8s-version-452467 kubelet[660]: E1210 00:35:43.722214 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.692303 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:53 old-k8s-version-452467 kubelet[660]: E1210 00:35:53.730978 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.692654 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:56 old-k8s-version-452467 kubelet[660]: E1210 00:35:56.722173 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.692866 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:07 old-k8s-version-452467 kubelet[660]: E1210 00:36:07.722744 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.693239 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:10 old-k8s-version-452467 kubelet[660]: E1210 00:36:10.722185 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.693458 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:21 old-k8s-version-452467 kubelet[660]: E1210 00:36:21.723017 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.694157 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:24 old-k8s-version-452467 kubelet[660]: E1210 00:36:24.846395 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.694546 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:32 old-k8s-version-452467 kubelet[660]: E1210 00:36:32.652988 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.694797 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:34 old-k8s-version-452467 kubelet[660]: E1210 00:36:34.722539 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.695149 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:45 old-k8s-version-452467 kubelet[660]: E1210 00:36:45.725415 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.695363 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:49 old-k8s-version-452467 kubelet[660]: E1210 00:36:49.722539 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.695704 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:00 old-k8s-version-452467 kubelet[660]: E1210 00:37:00.722584 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.695929 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:00 old-k8s-version-452467 kubelet[660]: E1210 00:37:00.722789 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.696145 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:11 old-k8s-version-452467 kubelet[660]: E1210 00:37:11.722844 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.696492 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:15 old-k8s-version-452467 kubelet[660]: E1210 00:37:15.725496 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.696706 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:25 old-k8s-version-452467 kubelet[660]: E1210 00:37:25.722996 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.697070 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:29 old-k8s-version-452467 kubelet[660]: E1210 00:37:29.722673 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.697441 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722443 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.697652 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722952 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.697858 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.722722 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.698226 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.723929 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.698578 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:05 old-k8s-version-452467 kubelet[660]: E1210 00:38:05.722301 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.698785 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:06 old-k8s-version-452467 kubelet[660]: E1210 00:38:06.723507 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
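Every warning in the scan above reduces to two recurring conditions: metrics-server can never pull fake.domain/registry.k8s.io/echoserver:1.4 because fake.domain is an unresolvable registry host (apparently deliberate in this test, to exercise the image-pull failure path), and dashboard-metrics-scraper is crash-looping with a back-off that the kubelet doubles on each failure (already at 2m40s here; the kubelet caps it at 5m). A minimal way to confirm both by hand, assuming a shell on the node (e.g. minikube ssh -p old-k8s-version-452467) and the same crictl the harness invokes:

    sudo crictl ps -a --name=metrics-server              # no running container: the image pull never succeeds
    sudo crictl ps -a --name=dashboard-metrics-scraper   # a trail of Exited containers behind the CrashLoopBackOff
    getent hosts fake.domain || echo 'fake.domain does not resolve (expected here)'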
I1210 00:38:09.698818 1317926 logs.go:123] Gathering logs for etcd [f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76] ...
I1210 00:38:09.698854 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76"
I1210 00:38:09.762841 1317926 logs.go:123] Gathering logs for kube-proxy [a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e] ...
I1210 00:38:09.762925 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e"
I1210 00:38:09.816292 1317926 logs.go:123] Gathering logs for kube-controller-manager [dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0] ...
I1210 00:38:09.816369 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0"
I1210 00:38:09.893094 1317926 logs.go:123] Gathering logs for storage-provisioner [c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488] ...
I1210 00:38:09.893197 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488"
I1210 00:38:09.946084 1317926 out.go:358] Setting ErrFile to fd 2...
I1210 00:38:09.946116 1317926 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1210 00:38:09.946187 1317926 out.go:270] X Problems detected in kubelet:
W1210 00:38:09.946206 1317926 out.go:270] Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722952 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.946212 1317926 out.go:270] Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.722722 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.946225 1317926 out.go:270] Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.723929 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.946250 1317926 out.go:270] Dec 10 00:38:05 old-k8s-version-452467 kubelet[660]: E1210 00:38:05.722301 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.946284 1317926 out.go:270] Dec 10 00:38:06 old-k8s-version-452467 kubelet[660]: E1210 00:38:06.723507 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1210 00:38:09.946290 1317926 out.go:358] Setting ErrFile to fd 2...
I1210 00:38:09.946295 1317926 out.go:392] TERM=,COLORTERM=, which probably does not support color
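The "X Problems detected in kubelet:" block immediately above is not a new set of events: it is minikube re-emitting the most recent matches from the journalctl scan on stderr. A rough hand-rolled equivalent of that scan, with an assumed grep pattern (logs.go:138 applies its own matchers):

    sudo journalctl -u kubelet -n 400 --no-pager | grep -E 'pod_workers.go|reflector.go' | tail -n 5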
I1210 00:38:19.946739 1317926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 00:38:19.959371 1317926 api_server.go:72] duration metric: took 5m56.221058206s to wait for apiserver process to appear ...
I1210 00:38:19.959394 1317926 api_server.go:88] waiting for apiserver healthz status ...
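Having confirmed a kube-apiserver process exists (pgrep above), the harness now polls the apiserver's healthz endpoint until it answers. A hedged host-side equivalent, assuming the kubeconfig context carries the profile name as minikube normally writes it:

    kubectl --context old-k8s-version-452467 get --raw /healthz && echo   # prints "ok" once the apiserver is serving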
I1210 00:38:19.959429 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1210 00:38:19.959490 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1210 00:38:20.010592 1317926 cri.go:89] found id: "d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d"
I1210 00:38:20.010617 1317926 cri.go:89] found id: "47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85"
I1210 00:38:20.010622 1317926 cri.go:89] found id: ""
I1210 00:38:20.010630 1317926 logs.go:282] 2 containers: [d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d 47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85]
I1210 00:38:20.010698 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.016138 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.020971 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1210 00:38:20.021101 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1210 00:38:20.081403 1317926 cri.go:89] found id: "4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4"
I1210 00:38:20.081424 1317926 cri.go:89] found id: "f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76"
I1210 00:38:20.081429 1317926 cri.go:89] found id: ""
I1210 00:38:20.081436 1317926 logs.go:282] 2 containers: [4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4 f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76]
I1210 00:38:20.081501 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.086290 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.091233 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1210 00:38:20.091381 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1210 00:38:20.149418 1317926 cri.go:89] found id: "8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051"
I1210 00:38:20.149499 1317926 cri.go:89] found id: "1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154"
I1210 00:38:20.149518 1317926 cri.go:89] found id: ""
I1210 00:38:20.149548 1317926 logs.go:282] 2 containers: [8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051 1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154]
I1210 00:38:20.149657 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.158679 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.167569 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1210 00:38:20.167748 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1210 00:38:20.239974 1317926 cri.go:89] found id: "c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077"
I1210 00:38:20.240036 1317926 cri.go:89] found id: "47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869"
I1210 00:38:20.240063 1317926 cri.go:89] found id: ""
I1210 00:38:20.240086 1317926 logs.go:282] 2 containers: [c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077 47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869]
I1210 00:38:20.240172 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.244686 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.248645 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1210 00:38:20.248717 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1210 00:38:20.305795 1317926 cri.go:89] found id: "a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e"
I1210 00:38:20.305814 1317926 cri.go:89] found id: "d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717"
I1210 00:38:20.305819 1317926 cri.go:89] found id: ""
I1210 00:38:20.305826 1317926 logs.go:282] 2 containers: [a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717]
I1210 00:38:20.305885 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.309984 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.314071 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1210 00:38:20.314146 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1210 00:38:20.364110 1317926 cri.go:89] found id: "dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0"
I1210 00:38:20.364130 1317926 cri.go:89] found id: "6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21"
I1210 00:38:20.364146 1317926 cri.go:89] found id: ""
I1210 00:38:20.364153 1317926 logs.go:282] 2 containers: [dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0 6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21]
I1210 00:38:20.364210 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.368655 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.373007 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1210 00:38:20.373132 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1210 00:38:20.421181 1317926 cri.go:89] found id: "eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469"
I1210 00:38:20.421255 1317926 cri.go:89] found id: "bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e"
I1210 00:38:20.421288 1317926 cri.go:89] found id: ""
I1210 00:38:20.421316 1317926 logs.go:282] 2 containers: [eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469 bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e]
I1210 00:38:20.421412 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.425659 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.429627 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1210 00:38:20.429781 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1210 00:38:20.478682 1317926 cri.go:89] found id: "07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6"
I1210 00:38:20.478742 1317926 cri.go:89] found id: ""
I1210 00:38:20.478771 1317926 logs.go:282] 1 containers: [07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6]
I1210 00:38:20.478860 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.483107 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1210 00:38:20.483228 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1210 00:38:20.548056 1317926 cri.go:89] found id: "c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488"
I1210 00:38:20.548129 1317926 cri.go:89] found id: "7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41"
I1210 00:38:20.548149 1317926 cri.go:89] found id: ""
I1210 00:38:20.548173 1317926 logs.go:282] 2 containers: [c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488 7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41]
I1210 00:38:20.548261 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.552522 1317926 ssh_runner.go:195] Run: which crictl
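The block above is one discovery pass: for each component, resolve crictl with which, list every container, running or exited, with crictl ps -a --quiet --name=<component>, and keep the IDs; the gathering calls that follow tail the newest 400 lines per ID. A compact sketch of the same loop (component list copied from the output above; run as root on the node):

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
      for id in $(crictl ps -a --quiet --name="$c"); do
        echo "== $c ${id:0:12} =="                        # container ID shortened for readability
        crictl logs --tail 400 "$id" 2>&1 | tail -n 3     # last few of the 400 gathered lines
      done
    done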
I1210 00:38:20.556568 1317926 logs.go:123] Gathering logs for kube-scheduler [47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869] ...
I1210 00:38:20.556645 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869"
I1210 00:38:20.633390 1317926 logs.go:123] Gathering logs for kube-proxy [a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e] ...
I1210 00:38:20.633461 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e"
I1210 00:38:20.688605 1317926 logs.go:123] Gathering logs for kindnet [eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469] ...
I1210 00:38:20.688633 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469"
I1210 00:38:20.743156 1317926 logs.go:123] Gathering logs for kindnet [bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e] ...
I1210 00:38:20.743229 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e"
I1210 00:38:20.792834 1317926 logs.go:123] Gathering logs for kubernetes-dashboard [07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6] ...
I1210 00:38:20.792860 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6"
I1210 00:38:20.839415 1317926 logs.go:123] Gathering logs for storage-provisioner [7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41] ...
I1210 00:38:20.839448 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41"
I1210 00:38:20.886158 1317926 logs.go:123] Gathering logs for coredns [8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051] ...
I1210 00:38:20.886182 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051"
I1210 00:38:20.932316 1317926 logs.go:123] Gathering logs for kube-scheduler [c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077] ...
I1210 00:38:20.932339 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077"
I1210 00:38:20.979376 1317926 logs.go:123] Gathering logs for containerd ...
I1210 00:38:20.979401 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1210 00:38:21.052205 1317926 logs.go:123] Gathering logs for etcd [4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4] ...
I1210 00:38:21.052243 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4"
I1210 00:38:21.169258 1317926 logs.go:123] Gathering logs for describe nodes ...
I1210 00:38:21.169454 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
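The describe-nodes step runs the bundled v1.20.0 kubectl inside the node against the in-cluster kubeconfig at /var/lib/minikube/kubeconfig. From the host, a rough equivalent, again assuming the usual profile-named context:

    kubectl --context old-k8s-version-452467 describe nodes | head -n 40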
I1210 00:38:21.369081 1317926 logs.go:123] Gathering logs for kube-apiserver [47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85] ...
I1210 00:38:21.369170 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85"
I1210 00:38:21.463157 1317926 logs.go:123] Gathering logs for etcd [f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76] ...
I1210 00:38:21.463230 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76"
I1210 00:38:21.538927 1317926 logs.go:123] Gathering logs for kube-proxy [d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717] ...
I1210 00:38:21.538957 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717"
I1210 00:38:21.592235 1317926 logs.go:123] Gathering logs for kube-controller-manager [dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0] ...
I1210 00:38:21.592266 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0"
I1210 00:38:21.675032 1317926 logs.go:123] Gathering logs for kube-controller-manager [6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21] ...
I1210 00:38:21.675072 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21"
I1210 00:38:21.753859 1317926 logs.go:123] Gathering logs for storage-provisioner [c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488] ...
I1210 00:38:21.753901 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488"
I1210 00:38:21.804424 1317926 logs.go:123] Gathering logs for kubelet ...
I1210 00:38:21.804453 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1210 00:38:21.881620 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.134990 660 reflector.go:138] object-"kube-system"/"kindnet-token-jppl8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jppl8" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.881878 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135109 660 reflector.go:138] object-"default"/"default-token-cjfvv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-cjfvv" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.882149 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135173 660 reflector.go:138] object-"kube-system"/"kube-proxy-token-kh82p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kh82p" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.882424 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135229 660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-t88lj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-t88lj" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.882731 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135383 660 reflector.go:138] object-"kube-system"/"metrics-server-token-6t6sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-6t6sh" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.882972 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135444 660 reflector.go:138] object-"kube-system"/"coredns-token-ltmzn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ltmzn" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.883199 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135505 660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.883437 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135561 660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
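The eight reflector.go errors above all carry the same 00:32:40 timestamp: right after the container restart, the kubelet begins watching per-pod Secrets and ConfigMaps before the apiserver's node authorizer has re-established the relationship between node old-k8s-version-452467 and those objects, so each list is forbidden once at startup and not again. One way to check that this is a one-shot startup race rather than an ongoing fault (the count should stop growing after startup):

    sudo journalctl -u kubelet --no-pager | grep -c 'reflector.go:138'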
W1210 00:38:21.896238 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:44 old-k8s-version-452467 kubelet[660]: E1210 00:32:44.318795 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:21.896482 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:44 old-k8s-version-452467 kubelet[660]: E1210 00:32:44.966265 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.900533 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:59 old-k8s-version-452467 kubelet[660]: E1210 00:32:59.771822 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:21.901421 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:13 old-k8s-version-452467 kubelet[660]: E1210 00:33:13.748779 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.901869 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:14 old-k8s-version-452467 kubelet[660]: E1210 00:33:14.246302 660 pod_workers.go:191] Error syncing pod 7cbccf04-1bab-4d0f-b6b7-06642841c9ad ("storage-provisioner_kube-system(7cbccf04-1bab-4d0f-b6b7-06642841c9ad)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7cbccf04-1bab-4d0f-b6b7-06642841c9ad)"
W1210 00:38:21.902467 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:15 old-k8s-version-452467 kubelet[660]: E1210 00:33:15.274011 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.902797 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:16 old-k8s-version-452467 kubelet[660]: E1210 00:33:16.278186 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.903467 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:22 old-k8s-version-452467 kubelet[660]: E1210 00:33:22.652549 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.906013 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:25 old-k8s-version-452467 kubelet[660]: E1210 00:33:25.747166 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:21.906783 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:36 old-k8s-version-452467 kubelet[660]: E1210 00:33:36.343317 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.906996 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:37 old-k8s-version-452467 kubelet[660]: E1210 00:33:37.723434 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.907348 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:42 old-k8s-version-452467 kubelet[660]: E1210 00:33:42.652410 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.907556 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:49 old-k8s-version-452467 kubelet[660]: E1210 00:33:49.723131 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.907908 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:53 old-k8s-version-452467 kubelet[660]: E1210 00:33:53.722786 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.908318 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:02 old-k8s-version-452467 kubelet[660]: E1210 00:34:02.722788 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.908973 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:08 old-k8s-version-452467 kubelet[660]: E1210 00:34:08.441533 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.909381 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:12 old-k8s-version-452467 kubelet[660]: E1210 00:34:12.653097 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.911917 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:13 old-k8s-version-452467 kubelet[660]: E1210 00:34:13.735304 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:21.912302 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:25 old-k8s-version-452467 kubelet[660]: E1210 00:34:25.726049 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.912544 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:26 old-k8s-version-452467 kubelet[660]: E1210 00:34:26.722773 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.912777 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:37 old-k8s-version-452467 kubelet[660]: E1210 00:34:37.726330 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.913222 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:37 old-k8s-version-452467 kubelet[660]: E1210 00:34:37.732566 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.913483 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:48 old-k8s-version-452467 kubelet[660]: E1210 00:34:48.723555 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.914779 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:52 old-k8s-version-452467 kubelet[660]: E1210 00:34:52.558062 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.915148 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:53 old-k8s-version-452467 kubelet[660]: E1210 00:34:53.558808 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.915365 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:03 old-k8s-version-452467 kubelet[660]: E1210 00:35:03.722557 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.915718 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:06 old-k8s-version-452467 kubelet[660]: E1210 00:35:06.722239 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.915929 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:18 old-k8s-version-452467 kubelet[660]: E1210 00:35:18.722467 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.916281 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:19 old-k8s-version-452467 kubelet[660]: E1210 00:35:19.722179 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.916489 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:31 old-k8s-version-452467 kubelet[660]: E1210 00:35:31.727727 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.916852 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:32 old-k8s-version-452467 kubelet[660]: E1210 00:35:32.722255 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.923582 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:42 old-k8s-version-452467 kubelet[660]: E1210 00:35:42.732073 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:21.923968 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:43 old-k8s-version-452467 kubelet[660]: E1210 00:35:43.722214 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.924180 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:53 old-k8s-version-452467 kubelet[660]: E1210 00:35:53.730978 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.924537 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:56 old-k8s-version-452467 kubelet[660]: E1210 00:35:56.722173 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.924755 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:07 old-k8s-version-452467 kubelet[660]: E1210 00:36:07.722744 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.925160 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:10 old-k8s-version-452467 kubelet[660]: E1210 00:36:10.722185 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.925383 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:21 old-k8s-version-452467 kubelet[660]: E1210 00:36:21.723017 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.926007 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:24 old-k8s-version-452467 kubelet[660]: E1210 00:36:24.846395 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.926368 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:32 old-k8s-version-452467 kubelet[660]: E1210 00:36:32.652988 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.926576 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:34 old-k8s-version-452467 kubelet[660]: E1210 00:36:34.722539 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.926931 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:45 old-k8s-version-452467 kubelet[660]: E1210 00:36:45.725415 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.927145 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:49 old-k8s-version-452467 kubelet[660]: E1210 00:36:49.722539 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.927485 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:00 old-k8s-version-452467 kubelet[660]: E1210 00:37:00.722584 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.927716 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:00 old-k8s-version-452467 kubelet[660]: E1210 00:37:00.722789 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.927928 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:11 old-k8s-version-452467 kubelet[660]: E1210 00:37:11.722844 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.928279 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:15 old-k8s-version-452467 kubelet[660]: E1210 00:37:15.725496 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.929039 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:25 old-k8s-version-452467 kubelet[660]: E1210 00:37:25.722996 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.929421 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:29 old-k8s-version-452467 kubelet[660]: E1210 00:37:29.722673 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.929773 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722443 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.929980 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722952 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.930207 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.722722 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.930592 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.723929 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.930992 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:05 old-k8s-version-452467 kubelet[660]: E1210 00:38:05.722301 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.931218 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:06 old-k8s-version-452467 kubelet[660]: E1210 00:38:06.723507 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.931635 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:16 old-k8s-version-452467 kubelet[660]: E1210 00:38:16.722163 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.931868 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:18 old-k8s-version-452467 kubelet[660]: E1210 00:38:18.722562 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1210 00:38:21.931893 1317926 logs.go:123] Gathering logs for container status ...
I1210 00:38:21.931919 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1210 00:38:22.019712 1317926 logs.go:123] Gathering logs for kube-apiserver [d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d] ...
I1210 00:38:22.019742 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d"
I1210 00:38:22.106326 1317926 logs.go:123] Gathering logs for coredns [1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154] ...
I1210 00:38:22.106366 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154"
I1210 00:38:22.151824 1317926 logs.go:123] Gathering logs for dmesg ...
I1210 00:38:22.151856 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1210 00:38:22.173805 1317926 out.go:358] Setting ErrFile to fd 2...
I1210 00:38:22.173830 1317926 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1210 00:38:22.173879 1317926 out.go:270] X Problems detected in kubelet:
W1210 00:38:22.173894 1317926 out.go:270] Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.723929 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:22.173903 1317926 out.go:270] Dec 10 00:38:05 old-k8s-version-452467 kubelet[660]: E1210 00:38:05.722301 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:22.173916 1317926 out.go:270] Dec 10 00:38:06 old-k8s-version-452467 kubelet[660]: E1210 00:38:06.723507 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:22.173922 1317926 out.go:270] Dec 10 00:38:16 old-k8s-version-452467 kubelet[660]: E1210 00:38:16.722163 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:22.173929 1317926 out.go:270] Dec 10 00:38:18 old-k8s-version-452467 kubelet[660]: E1210 00:38:18.722562 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1210 00:38:22.173939 1317926 out.go:358] Setting ErrFile to fd 2...
I1210 00:38:22.173945 1317926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:38:32.175795 1317926 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1210 00:38:32.189016 1317926 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I1210 00:38:32.192274 1317926 out.go:201]
W1210 00:38:32.194627 1317926 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1210 00:38:32.194671 1317926 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1210 00:38:32.194691 1317926 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1210 00:38:32.194703 1317926 out.go:270] *
W1210 00:38:32.195702 1317926 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1210 00:38:32.197399 1317926 out.go:201]
** /stderr **
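The two kubelet problems repeated throughout the stderr above have distinct causes: metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 because fake.domain is intentionally unresolvable (the audit table further down shows the addon was enabled with --registries=MetricsServer=fake.domain), while dashboard-metrics-scraper is restarting under a growing CrashLoopBackOff. A minimal sketch for confirming both by hand, assuming the kubeconfig context follows the profile name and that nslookup exists in the node image; pod names are taken from the log:

    # Pod events should show the ErrImagePull -> ImagePullBackOff chain for metrics-server.
    kubectl --context old-k8s-version-452467 -n kube-system describe pod metrics-server-9975d5f86-kls2p
    # Confirm the DNS failure the kubelet reports ("lookup fake.domain ... no such host").
    out/minikube-linux-arm64 ssh -p old-k8s-version-452467 -- nslookup fake.domain
    # Last termination state of the crash-looping scraper container.
    kubectl --context old-k8s-version-452467 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-kvpqt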
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-452467 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
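The stderr block above spells out its own recovery path. A condensed, hedged replay of it, dropping the KVM flags that are no-ops under the docker driver but otherwise mirroring the failing invocation:

    # Remediation minikube suggests for K8S_UNHEALTHY_CONTROL_PLANE:
    out/minikube-linux-arm64 delete --all --purge
    # Retry the second start with the same effective flags:
    out/minikube-linux-arm64 start -p old-k8s-version-452467 --memory=2200 --alsologtostderr --wait=true --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
    # If it fails again, capture logs for the GitHub issue the box requests:
    out/minikube-linux-arm64 -p old-k8s-version-452467 logs --file=logs.txt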
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-452467
helpers_test.go:235: (dbg) docker inspect old-k8s-version-452467:
-- stdout --
[
{
"Id": "41a7b9b5940449f4b7ec31ae615d088b7ee60bc20e58c172a4a1dd9efb4c59d6",
"Created": "2024-12-10T00:29:21.190965747Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1318125,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-12-10T00:32:16.828200412Z",
"FinishedAt": "2024-12-10T00:32:15.875636591Z"
},
"Image": "sha256:51526bd7c0894c18bc1ef50650a0aaaea3bed24f70f72f77ac668ae72dfff137",
"ResolvConfPath": "/var/lib/docker/containers/41a7b9b5940449f4b7ec31ae615d088b7ee60bc20e58c172a4a1dd9efb4c59d6/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/41a7b9b5940449f4b7ec31ae615d088b7ee60bc20e58c172a4a1dd9efb4c59d6/hostname",
"HostsPath": "/var/lib/docker/containers/41a7b9b5940449f4b7ec31ae615d088b7ee60bc20e58c172a4a1dd9efb4c59d6/hosts",
"LogPath": "/var/lib/docker/containers/41a7b9b5940449f4b7ec31ae615d088b7ee60bc20e58c172a4a1dd9efb4c59d6/41a7b9b5940449f4b7ec31ae615d088b7ee60bc20e58c172a4a1dd9efb4c59d6-json.log",
"Name": "/old-k8s-version-452467",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-452467:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-452467",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/d555f2fae945772a7a336deee2aaddb805a5a1f601d618eb5340b7b6ada1848a-init/diff:/var/lib/docker/overlay2/8692fbba86a269ead8dfbecf4f487b545d2db6c8b7e7f4885ed5671002fe31ff/diff",
"MergedDir": "/var/lib/docker/overlay2/d555f2fae945772a7a336deee2aaddb805a5a1f601d618eb5340b7b6ada1848a/merged",
"UpperDir": "/var/lib/docker/overlay2/d555f2fae945772a7a336deee2aaddb805a5a1f601d618eb5340b7b6ada1848a/diff",
"WorkDir": "/var/lib/docker/overlay2/d555f2fae945772a7a336deee2aaddb805a5a1f601d618eb5340b7b6ada1848a/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-452467",
"Source": "/var/lib/docker/volumes/old-k8s-version-452467/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-452467",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-452467",
"name.minikube.sigs.k8s.io": "old-k8s-version-452467",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "d7f58e038c2141e9ae0e511358f5bb982cfb93c37252fce786f04fb60d122ea8",
"SandboxKey": "/var/run/docker/netns/d7f58e038c21",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34523"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34524"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34527"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34525"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34526"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-452467": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:55:02",
"DriverOpts": null,
"NetworkID": "ee0b68f1d4602ef77a503e690edacae0f9c269037c9cab78566c909bb7f840c1",
"EndpointID": "9fbd826ffd306210ecc19f141c34a48437eed5ff53d948a389df3ef028766fbe",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-452467",
"41a7b9b59404"
]
}
}
}
}
]
-- /stdout --
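Most of the inspect dump above is noise for this failure; the fields the post-mortem actually uses (state, restart count, port map, node IP) can be pulled directly with Go templates. A sketch using only the standard docker CLI against the container named in the log:

    # State and restart count ("running", restarts=0 above):
    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-452467
    # Host port mappings (8443/tcp -> 127.0.0.1:34526 above):
    docker port old-k8s-version-452467
    # Node IP on the profile network (192.168.85.2 above):
    docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-452467").IPAddress}}' old-k8s-version-452467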
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-452467 -n old-k8s-version-452467
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-452467 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-452467 logs -n 25: (3.437413232s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
| start | -p force-systemd-flag-166504 | force-systemd-flag-166504 | jenkins | v1.34.0 | 10 Dec 24 00:28 UTC | 10 Dec 24 00:28 UTC |
| | --memory=2048 --force-systemd | | | | | |
| | --alsologtostderr | | | | | |
| | -v=5 --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-flag-166504 | force-systemd-flag-166504 | jenkins | v1.34.0 | 10 Dec 24 00:28 UTC | 10 Dec 24 00:28 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-flag-166504 | force-systemd-flag-166504 | jenkins | v1.34.0 | 10 Dec 24 00:28 UTC | 10 Dec 24 00:28 UTC |
| start | -p cert-options-734210 | cert-options-734210 | jenkins | v1.34.0 | 10 Dec 24 00:28 UTC | 10 Dec 24 00:29 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-734210 ssh | cert-options-734210 | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-734210 -- sudo | cert-options-734210 | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-734210 | cert-options-734210 | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:29 UTC |
| start | -p old-k8s-version-452467 | old-k8s-version-452467 | jenkins | v1.34.0 | 10 Dec 24 00:29 UTC | 10 Dec 24 00:31 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-672485 | cert-expiration-672485 | jenkins | v1.34.0 | 10 Dec 24 00:31 UTC | 10 Dec 24 00:31 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-672485 | cert-expiration-672485 | jenkins | v1.34.0 | 10 Dec 24 00:31 UTC | 10 Dec 24 00:31 UTC |
| start | -p no-preload-528963 | no-preload-528963 | jenkins | v1.34.0 | 10 Dec 24 00:31 UTC | 10 Dec 24 00:32 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| addons | enable metrics-server -p old-k8s-version-452467 | old-k8s-version-452467 | jenkins | v1.34.0 | 10 Dec 24 00:32 UTC | 10 Dec 24 00:32 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-452467 | old-k8s-version-452467 | jenkins | v1.34.0 | 10 Dec 24 00:32 UTC | 10 Dec 24 00:32 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-452467 | old-k8s-version-452467 | jenkins | v1.34.0 | 10 Dec 24 00:32 UTC | 10 Dec 24 00:32 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-452467 | old-k8s-version-452467 | jenkins | v1.34.0 | 10 Dec 24 00:32 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-528963 | no-preload-528963 | jenkins | v1.34.0 | 10 Dec 24 00:32 UTC | 10 Dec 24 00:32 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-528963 | no-preload-528963 | jenkins | v1.34.0 | 10 Dec 24 00:32 UTC | 10 Dec 24 00:32 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-528963 | no-preload-528963 | jenkins | v1.34.0 | 10 Dec 24 00:32 UTC | 10 Dec 24 00:32 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-528963 | no-preload-528963 | jenkins | v1.34.0 | 10 Dec 24 00:32 UTC | 10 Dec 24 00:37 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| image | no-preload-528963 image list | no-preload-528963 | jenkins | v1.34.0 | 10 Dec 24 00:37 UTC | 10 Dec 24 00:37 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-528963 | no-preload-528963 | jenkins | v1.34.0 | 10 Dec 24 00:37 UTC | 10 Dec 24 00:38 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-528963 | no-preload-528963 | jenkins | v1.34.0 | 10 Dec 24 00:38 UTC | 10 Dec 24 00:38 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-528963 | no-preload-528963 | jenkins | v1.34.0 | 10 Dec 24 00:38 UTC | 10 Dec 24 00:38 UTC |
| delete | -p no-preload-528963 | no-preload-528963 | jenkins | v1.34.0 | 10 Dec 24 00:38 UTC | 10 Dec 24 00:38 UTC |
| start | -p embed-certs-401547 | embed-certs-401547 | jenkins | v1.34.0 | 10 Dec 24 00:38 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
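Read top to bottom, the audit rows for the old-k8s-version-452467 profile form the reproduction recipe for this failure. A condensed replay, arguments taken verbatim from the table; the second start is the one that exits 102:

    out/minikube-linux-arm64 start -p old-k8s-version-452467 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-452467 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    out/minikube-linux-arm64 stop -p old-k8s-version-452467 --alsologtostderr -v=3
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-452467 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    out/minikube-linux-arm64 start -p old-k8s-version-452467 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0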
==> Last Start <==
Log file created at: 2024/12/10 00:38:05
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.23.2 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1210 00:38:05.981949 1327579 out.go:345] Setting OutFile to fd 1 ...
I1210 00:38:05.982080 1327579 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:38:05.982091 1327579 out.go:358] Setting ErrFile to fd 2...
I1210 00:38:05.982097 1327579 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:38:05.982347 1327579 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20062-1103064/.minikube/bin
I1210 00:38:05.982752 1327579 out.go:352] Setting JSON to false
I1210 00:38:05.983752 1327579 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":30014,"bootTime":1733761072,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1210 00:38:05.983829 1327579 start.go:139] virtualization:
I1210 00:38:05.986396 1327579 out.go:177] * [embed-certs-401547] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1210 00:38:05.989074 1327579 out.go:177] - MINIKUBE_LOCATION=20062
I1210 00:38:05.989247 1327579 notify.go:220] Checking for updates...
I1210 00:38:05.993282 1327579 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1210 00:38:05.995638 1327579 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20062-1103064/kubeconfig
I1210 00:38:05.998007 1327579 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20062-1103064/.minikube
I1210 00:38:06.000412 1327579 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1210 00:38:06.007153 1327579 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1210 00:38:06.010471 1327579 config.go:182] Loaded profile config "old-k8s-version-452467": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1210 00:38:06.010625 1327579 driver.go:394] Setting default libvirt URI to qemu:///system
I1210 00:38:06.039178 1327579 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
I1210 00:38:06.039314 1327579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 00:38:06.113398 1327579 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-10 00:38:06.10383499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1210 00:38:06.113517 1327579 docker.go:318] overlay module found
I1210 00:38:06.116370 1327579 out.go:177] * Using the docker driver based on user configuration
I1210 00:38:06.119002 1327579 start.go:297] selected driver: docker
I1210 00:38:06.119024 1327579 start.go:901] validating driver "docker" against <nil>
I1210 00:38:06.119040 1327579 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1210 00:38:06.119798 1327579 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1210 00:38:06.191423 1327579 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-10 00:38:06.181230846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1210 00:38:06.191639 1327579 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I1210 00:38:06.192049 1327579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1210 00:38:06.194957 1327579 out.go:177] * Using Docker driver with root privileges
I1210 00:38:06.197605 1327579 cni.go:84] Creating CNI manager for ""
I1210 00:38:06.197686 1327579 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1210 00:38:06.197718 1327579 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I1210 00:38:06.197813 1327579 start.go:340] cluster config:
{Name:embed-certs-401547 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-401547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1210 00:38:06.200761 1327579 out.go:177] * Starting "embed-certs-401547" primary control-plane node in "embed-certs-401547" cluster
I1210 00:38:06.203432 1327579 cache.go:121] Beginning downloading kic base image for docker with containerd
I1210 00:38:06.206251 1327579 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
I1210 00:38:06.208926 1327579 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1210 00:38:06.208958 1327579 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
I1210 00:38:06.208976 1327579 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20062-1103064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
I1210 00:38:06.208985 1327579 cache.go:56] Caching tarball of preloaded images
I1210 00:38:06.209065 1327579 preload.go:172] Found /home/jenkins/minikube-integration/20062-1103064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1210 00:38:06.209075 1327579 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
I1210 00:38:06.209187 1327579 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/config.json ...
I1210 00:38:06.209206 1327579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/config.json: {Name:mk5f89016f09e38a62f17856fe0596263b309c57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 00:38:06.229747 1327579 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
I1210 00:38:06.229773 1327579 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
I1210 00:38:06.229792 1327579 cache.go:194] Successfully downloaded all kic artifacts
I1210 00:38:06.229824 1327579 start.go:360] acquireMachinesLock for embed-certs-401547: {Name:mk6e00f537cec50748df81d3d0449d0e14c36725 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1210 00:38:06.230940 1327579 start.go:364] duration metric: took 1.093625ms to acquireMachinesLock for "embed-certs-401547"
I1210 00:38:06.230980 1327579 start.go:93] Provisioning new machine with config: &{Name:embed-certs-401547 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-401547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1210 00:38:06.231062 1327579 start.go:125] createHost starting for "" (driver="docker")
I1210 00:38:02.951478 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:38:05.406841 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:38:06.234406 1327579 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I1210 00:38:06.234646 1327579 start.go:159] libmachine.API.Create for "embed-certs-401547" (driver="docker")
I1210 00:38:06.234677 1327579 client.go:168] LocalClient.Create starting
I1210 00:38:06.234759 1327579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca.pem
I1210 00:38:06.234799 1327579 main.go:141] libmachine: Decoding PEM data...
I1210 00:38:06.234813 1327579 main.go:141] libmachine: Parsing certificate...
I1210 00:38:06.234866 1327579 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/cert.pem
I1210 00:38:06.234891 1327579 main.go:141] libmachine: Decoding PEM data...
I1210 00:38:06.234907 1327579 main.go:141] libmachine: Parsing certificate...
I1210 00:38:06.235285 1327579 cli_runner.go:164] Run: docker network inspect embed-certs-401547 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1210 00:38:06.251362 1327579 cli_runner.go:211] docker network inspect embed-certs-401547 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1210 00:38:06.251472 1327579 network_create.go:284] running [docker network inspect embed-certs-401547] to gather additional debugging logs...
I1210 00:38:06.251512 1327579 cli_runner.go:164] Run: docker network inspect embed-certs-401547
W1210 00:38:06.267971 1327579 cli_runner.go:211] docker network inspect embed-certs-401547 returned with exit code 1
I1210 00:38:06.268005 1327579 network_create.go:287] error running [docker network inspect embed-certs-401547]: docker network inspect embed-certs-401547: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-401547 not found
I1210 00:38:06.268019 1327579 network_create.go:289] output of [docker network inspect embed-certs-401547]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-401547 not found
** /stderr **
I1210 00:38:06.268117 1327579 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 00:38:06.285588 1327579 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-087bea50f996 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:d9:e2:81:7b} reservation:<nil>}
I1210 00:38:06.286040 1327579 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-775e5a16c2d7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:20:a0:f5:d8} reservation:<nil>}
I1210 00:38:06.286469 1327579 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-16f0bacdff9a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:e4:64:d3:c9} reservation:<nil>}
I1210 00:38:06.287005 1327579 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2c470}
I1210 00:38:06.287028 1327579 network_create.go:124] attempt to create docker network embed-certs-401547 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1210 00:38:06.287085 1327579 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-401547 embed-certs-401547
I1210 00:38:06.364301 1327579 network_create.go:108] docker network embed-certs-401547 192.168.76.0/24 created
I1210 00:38:06.364332 1327579 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-401547" container
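The network_create.go scan above walks the private ranges, skipping 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because existing bridges claim them, then takes 192.168.76.0/24. The same check can be reproduced by hand; a sketch with plain docker commands, the second copied from the Run: line above:

    # Every network and its subnet, mirroring the taken-subnet scan:
    docker network inspect $(docker network ls -q) --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
    # The create call minikube issued once 192.168.76.0/24 came up free:
    docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-401547 embed-certs-401547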
I1210 00:38:06.364419 1327579 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1210 00:38:06.391119 1327579 cli_runner.go:164] Run: docker volume create embed-certs-401547 --label name.minikube.sigs.k8s.io=embed-certs-401547 --label created_by.minikube.sigs.k8s.io=true
I1210 00:38:06.414817 1327579 oci.go:103] Successfully created a docker volume embed-certs-401547
I1210 00:38:06.414914 1327579 cli_runner.go:164] Run: docker run --rm --name embed-certs-401547-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-401547 --entrypoint /usr/bin/test -v embed-certs-401547:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
I1210 00:38:07.111273 1327579 oci.go:107] Successfully prepared a docker volume embed-certs-401547
I1210 00:38:07.111327 1327579 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1210 00:38:07.111347 1327579 kic.go:194] Starting extracting preloaded images to volume ...
I1210 00:38:07.111426 1327579 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20062-1103064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-401547:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
I1210 00:38:07.905845 1317926 pod_ready.go:103] pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace has status "Ready":"False"
I1210 00:38:07.905872 1317926 pod_ready.go:82] duration metric: took 4m0.008075913s for pod "metrics-server-9975d5f86-kls2p" in "kube-system" namespace to be "Ready" ...
E1210 00:38:07.905884 1317926 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1210 00:38:07.905892 1317926 pod_ready.go:39] duration metric: took 5m27.731687929s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
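These four lines are the proximate failure: the Ready wait on metrics-server-9975d5f86-kls2p ran its full 4m0s, the 5m27s extra wait hit its context deadline, and the start later surfaces this as the K8S_UNHEALTHY_CONTROL_PLANE exit. A sketch of watching the same condition by hand, assuming the kubeconfig context follows the profile name:

    # One-shot read of the Ready condition minikube was polling:
    kubectl --context old-k8s-version-452467 -n kube-system get pod metrics-server-9975d5f86-kls2p -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Or block on it with an explicit timeout:
    kubectl --context old-k8s-version-452467 -n kube-system wait --for=condition=Ready pod/metrics-server-9975d5f86-kls2p --timeout=4m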
I1210 00:38:07.905907 1317926 api_server.go:52] waiting for apiserver process to appear ...
I1210 00:38:07.905936 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1210 00:38:07.905996 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1210 00:38:07.959282 1317926 cri.go:89] found id: "d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d"
I1210 00:38:07.959302 1317926 cri.go:89] found id: "47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85"
I1210 00:38:07.959307 1317926 cri.go:89] found id: ""
I1210 00:38:07.959314 1317926 logs.go:282] 2 containers: [d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d 47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85]
I1210 00:38:07.959385 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:07.963458 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:07.967452 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1210 00:38:07.967524 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1210 00:38:08.015124 1317926 cri.go:89] found id: "4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4"
I1210 00:38:08.015146 1317926 cri.go:89] found id: "f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76"
I1210 00:38:08.015151 1317926 cri.go:89] found id: ""
I1210 00:38:08.015158 1317926 logs.go:282] 2 containers: [4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4 f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76]
I1210 00:38:08.015220 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.019565 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.023561 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1210 00:38:08.023677 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1210 00:38:08.073248 1317926 cri.go:89] found id: "8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051"
I1210 00:38:08.073324 1317926 cri.go:89] found id: "1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154"
I1210 00:38:08.073373 1317926 cri.go:89] found id: ""
I1210 00:38:08.073398 1317926 logs.go:282] 2 containers: [8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051 1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154]
I1210 00:38:08.073478 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.077764 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.081739 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1210 00:38:08.081857 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1210 00:38:08.144612 1317926 cri.go:89] found id: "c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077"
I1210 00:38:08.144694 1317926 cri.go:89] found id: "47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869"
I1210 00:38:08.144713 1317926 cri.go:89] found id: ""
I1210 00:38:08.144737 1317926 logs.go:282] 2 containers: [c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077 47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869]
I1210 00:38:08.144827 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.149481 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.153535 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1210 00:38:08.153660 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1210 00:38:08.203599 1317926 cri.go:89] found id: "a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e"
I1210 00:38:08.203674 1317926 cri.go:89] found id: "d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717"
I1210 00:38:08.203692 1317926 cri.go:89] found id: ""
I1210 00:38:08.203717 1317926 logs.go:282] 2 containers: [a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717]
I1210 00:38:08.203799 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.208045 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.211792 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1210 00:38:08.211911 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1210 00:38:08.255984 1317926 cri.go:89] found id: "dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0"
I1210 00:38:08.256058 1317926 cri.go:89] found id: "6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21"
I1210 00:38:08.256077 1317926 cri.go:89] found id: ""
I1210 00:38:08.256101 1317926 logs.go:282] 2 containers: [dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0 6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21]
I1210 00:38:08.256184 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.260301 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.263995 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1210 00:38:08.264125 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1210 00:38:08.315154 1317926 cri.go:89] found id: "eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469"
I1210 00:38:08.315229 1317926 cri.go:89] found id: "bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e"
I1210 00:38:08.315248 1317926 cri.go:89] found id: ""
I1210 00:38:08.315272 1317926 logs.go:282] 2 containers: [eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469 bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e]
I1210 00:38:08.315355 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.319457 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.323233 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1210 00:38:08.323357 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1210 00:38:08.371319 1317926 cri.go:89] found id: "07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6"
I1210 00:38:08.371394 1317926 cri.go:89] found id: ""
I1210 00:38:08.371417 1317926 logs.go:282] 1 container: [07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6]
I1210 00:38:08.371509 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.375493 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1210 00:38:08.375616 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1210 00:38:08.429911 1317926 cri.go:89] found id: "c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488"
I1210 00:38:08.429985 1317926 cri.go:89] found id: "7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41"
I1210 00:38:08.430004 1317926 cri.go:89] found id: ""
I1210 00:38:08.430029 1317926 logs.go:282] 2 containers: [c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488 7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41]
I1210 00:38:08.430126 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:08.435122 1317926 ssh_runner.go:195] Run: which crictl
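
Each listing/which pair above resolves crictl and then collects container IDs per control-plane component. A minimal sketch of that discovery step, shelling out the same way the log shows (illustrative; minikube actually routes these commands through its ssh_runner into the node):

    // Sketch: list IDs of all CRI containers (running or exited) whose
    // name matches a component, mirroring
    // `sudo crictl ps -a --quiet --name=<name>`.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func listCRIContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one 64-hex ID per line
    }

    func main() {
        ids, err := listCRIContainers("kube-apiserver")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
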
I1210 00:38:08.439326 1317926 logs.go:123] Gathering logs for kindnet [eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469] ...
I1210 00:38:08.439399 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469"
I1210 00:38:08.492661 1317926 logs.go:123] Gathering logs for kindnet [bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e] ...
I1210 00:38:08.492815 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e"
I1210 00:38:08.542139 1317926 logs.go:123] Gathering logs for kube-apiserver [d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d] ...
I1210 00:38:08.542223 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d"
I1210 00:38:08.617414 1317926 logs.go:123] Gathering logs for etcd [4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4] ...
I1210 00:38:08.617487 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4"
I1210 00:38:08.667735 1317926 logs.go:123] Gathering logs for coredns [8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051] ...
I1210 00:38:08.667809 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051"
I1210 00:38:08.724724 1317926 logs.go:123] Gathering logs for kube-scheduler [47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869] ...
I1210 00:38:08.724798 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869"
I1210 00:38:08.780190 1317926 logs.go:123] Gathering logs for kube-proxy [d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717] ...
I1210 00:38:08.780378 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717"
I1210 00:38:08.835091 1317926 logs.go:123] Gathering logs for kube-controller-manager [6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21] ...
I1210 00:38:08.835116 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21"
I1210 00:38:08.931488 1317926 logs.go:123] Gathering logs for dmesg ...
I1210 00:38:08.931580 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1210 00:38:08.951636 1317926 logs.go:123] Gathering logs for coredns [1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154] ...
I1210 00:38:08.951659 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154"
I1210 00:38:09.026506 1317926 logs.go:123] Gathering logs for kube-scheduler [c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077] ...
I1210 00:38:09.026532 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077"
I1210 00:38:09.087870 1317926 logs.go:123] Gathering logs for kubernetes-dashboard [07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6] ...
I1210 00:38:09.087953 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6"
I1210 00:38:09.162884 1317926 logs.go:123] Gathering logs for storage-provisioner [7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41] ...
I1210 00:38:09.162917 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41"
I1210 00:38:09.224509 1317926 logs.go:123] Gathering logs for describe nodes ...
I1210 00:38:09.224616 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1210 00:38:09.398154 1317926 logs.go:123] Gathering logs for kube-apiserver [47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85] ...
I1210 00:38:09.398187 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85"
I1210 00:38:09.463255 1317926 logs.go:123] Gathering logs for containerd ...
I1210 00:38:09.463295 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1210 00:38:09.532839 1317926 logs.go:123] Gathering logs for container status ...
I1210 00:38:09.532875 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1210 00:38:09.597269 1317926 logs.go:123] Gathering logs for kubelet ...
I1210 00:38:09.597457 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1210 00:38:09.658243 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.134990 660 reflector.go:138] object-"kube-system"/"kindnet-token-jppl8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jppl8" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.658519 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135109 660 reflector.go:138] object-"default"/"default-token-cjfvv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-cjfvv" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.658759 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135173 660 reflector.go:138] object-"kube-system"/"kube-proxy-token-kh82p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kh82p" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.659016 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135229 660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-t88lj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-t88lj" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.659266 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135383 660 reflector.go:138] object-"kube-system"/"metrics-server-token-6t6sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-6t6sh" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.659512 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135444 660 reflector.go:138] object-"kube-system"/"coredns-token-ltmzn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ltmzn" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.659735 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135505 660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.659961 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135561 660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:09.669433 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:44 old-k8s-version-452467 kubelet[660]: E1210 00:32:44.318795 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:09.669672 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:44 old-k8s-version-452467 kubelet[660]: E1210 00:32:44.966265 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.673687 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:59 old-k8s-version-452467 kubelet[660]: E1210 00:32:59.771822 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:09.674593 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:13 old-k8s-version-452467 kubelet[660]: E1210 00:33:13.748779 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.675061 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:14 old-k8s-version-452467 kubelet[660]: E1210 00:33:14.246302 660 pod_workers.go:191] Error syncing pod 7cbccf04-1bab-4d0f-b6b7-06642841c9ad ("storage-provisioner_kube-system(7cbccf04-1bab-4d0f-b6b7-06642841c9ad)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7cbccf04-1bab-4d0f-b6b7-06642841c9ad)"
W1210 00:38:09.675681 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:15 old-k8s-version-452467 kubelet[660]: E1210 00:33:15.274011 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.676046 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:16 old-k8s-version-452467 kubelet[660]: E1210 00:33:16.278186 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.676743 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:22 old-k8s-version-452467 kubelet[660]: E1210 00:33:22.652549 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.679222 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:25 old-k8s-version-452467 kubelet[660]: E1210 00:33:25.747166 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:09.679966 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:36 old-k8s-version-452467 kubelet[660]: E1210 00:33:36.343317 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.680174 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:37 old-k8s-version-452467 kubelet[660]: E1210 00:33:37.723434 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.680528 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:42 old-k8s-version-452467 kubelet[660]: E1210 00:33:42.652410 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.680741 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:49 old-k8s-version-452467 kubelet[660]: E1210 00:33:49.723131 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.681124 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:53 old-k8s-version-452467 kubelet[660]: E1210 00:33:53.722786 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.681333 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:02 old-k8s-version-452467 kubelet[660]: E1210 00:34:02.722788 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.682058 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:08 old-k8s-version-452467 kubelet[660]: E1210 00:34:08.441533 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.682414 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:12 old-k8s-version-452467 kubelet[660]: E1210 00:34:12.653097 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.684941 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:13 old-k8s-version-452467 kubelet[660]: E1210 00:34:13.735304 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:09.685301 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:25 old-k8s-version-452467 kubelet[660]: E1210 00:34:25.726049 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.685532 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:26 old-k8s-version-452467 kubelet[660]: E1210 00:34:26.722773 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.685742 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:37 old-k8s-version-452467 kubelet[660]: E1210 00:34:37.726330 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.686125 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:37 old-k8s-version-452467 kubelet[660]: E1210 00:34:37.732566 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.686332 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:48 old-k8s-version-452467 kubelet[660]: E1210 00:34:48.723555 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.686951 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:52 old-k8s-version-452467 kubelet[660]: E1210 00:34:52.558062 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.687302 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:53 old-k8s-version-452467 kubelet[660]: E1210 00:34:53.558808 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.687514 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:03 old-k8s-version-452467 kubelet[660]: E1210 00:35:03.722557 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.687868 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:06 old-k8s-version-452467 kubelet[660]: E1210 00:35:06.722239 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.688082 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:18 old-k8s-version-452467 kubelet[660]: E1210 00:35:18.722467 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.688434 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:19 old-k8s-version-452467 kubelet[660]: E1210 00:35:19.722179 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.688647 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:31 old-k8s-version-452467 kubelet[660]: E1210 00:35:31.727727 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.689027 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:32 old-k8s-version-452467 kubelet[660]: E1210 00:35:32.722255 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.691682 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:42 old-k8s-version-452467 kubelet[660]: E1210 00:35:42.732073 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:09.692088 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:43 old-k8s-version-452467 kubelet[660]: E1210 00:35:43.722214 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.692303 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:53 old-k8s-version-452467 kubelet[660]: E1210 00:35:53.730978 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.692654 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:56 old-k8s-version-452467 kubelet[660]: E1210 00:35:56.722173 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.692866 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:07 old-k8s-version-452467 kubelet[660]: E1210 00:36:07.722744 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.693239 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:10 old-k8s-version-452467 kubelet[660]: E1210 00:36:10.722185 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.693458 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:21 old-k8s-version-452467 kubelet[660]: E1210 00:36:21.723017 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.694157 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:24 old-k8s-version-452467 kubelet[660]: E1210 00:36:24.846395 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.694546 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:32 old-k8s-version-452467 kubelet[660]: E1210 00:36:32.652988 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.694797 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:34 old-k8s-version-452467 kubelet[660]: E1210 00:36:34.722539 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.695149 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:45 old-k8s-version-452467 kubelet[660]: E1210 00:36:45.725415 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.695363 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:49 old-k8s-version-452467 kubelet[660]: E1210 00:36:49.722539 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.695704 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:00 old-k8s-version-452467 kubelet[660]: E1210 00:37:00.722584 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.695929 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:00 old-k8s-version-452467 kubelet[660]: E1210 00:37:00.722789 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.696145 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:11 old-k8s-version-452467 kubelet[660]: E1210 00:37:11.722844 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.696492 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:15 old-k8s-version-452467 kubelet[660]: E1210 00:37:15.725496 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.696706 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:25 old-k8s-version-452467 kubelet[660]: E1210 00:37:25.722996 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.697070 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:29 old-k8s-version-452467 kubelet[660]: E1210 00:37:29.722673 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.697441 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722443 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.697652 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722952 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.697858 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.722722 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.698226 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.723929 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.698578 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:05 old-k8s-version-452467 kubelet[660]: E1210 00:38:05.722301 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.698785 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:06 old-k8s-version-452467 kubelet[660]: E1210 00:38:06.723507 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1210 00:38:09.698818 1317926 logs.go:123] Gathering logs for etcd [f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76] ...
I1210 00:38:09.698854 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76"
I1210 00:38:09.762841 1317926 logs.go:123] Gathering logs for kube-proxy [a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e] ...
I1210 00:38:09.762925 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e"
I1210 00:38:09.816292 1317926 logs.go:123] Gathering logs for kube-controller-manager [dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0] ...
I1210 00:38:09.816369 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0"
I1210 00:38:09.893094 1317926 logs.go:123] Gathering logs for storage-provisioner [c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488] ...
I1210 00:38:09.893197 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488"
I1210 00:38:09.946084 1317926 out.go:358] Setting ErrFile to fd 2...
I1210 00:38:09.946116 1317926 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1210 00:38:09.946187 1317926 out.go:270] X Problems detected in kubelet:
W1210 00:38:09.946206 1317926 out.go:270] Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722952 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.946212 1317926 out.go:270] Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.722722 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:09.946225 1317926 out.go:270] Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.723929 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.946250 1317926 out.go:270] Dec 10 00:38:05 old-k8s-version-452467 kubelet[660]: E1210 00:38:05.722301 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:09.946284 1317926 out.go:270] Dec 10 00:38:06 old-k8s-version-452467 kubelet[660]: E1210 00:38:06.723507 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1210 00:38:09.946290 1317926 out.go:358] Setting ErrFile to fd 2...
I1210 00:38:09.946295 1317926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:38:12.317999 1327579 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20062-1103064/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-401547:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (5.206528127s)
I1210 00:38:12.318032 1327579 kic.go:203] duration metric: took 5.206680977s to extract preloaded images to volume ...
W1210 00:38:12.318165 1327579 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1210 00:38:12.318305 1327579 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1210 00:38:12.388196 1327579 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-401547 --name embed-certs-401547 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-401547 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-401547 --network embed-certs-401547 --ip 192.168.76.2 --volume embed-certs-401547:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
I1210 00:38:12.734901 1327579 cli_runner.go:164] Run: docker container inspect embed-certs-401547 --format={{.State.Running}}
I1210 00:38:12.760901 1327579 cli_runner.go:164] Run: docker container inspect embed-certs-401547 --format={{.State.Status}}
I1210 00:38:12.787246 1327579 cli_runner.go:164] Run: docker exec embed-certs-401547 stat /var/lib/dpkg/alternatives/iptables
I1210 00:38:12.836873 1327579 oci.go:144] the created container "embed-certs-401547" has a running status.
I1210 00:38:12.836901 1327579 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20062-1103064/.minikube/machines/embed-certs-401547/id_rsa...
I1210 00:38:13.295836 1327579 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20062-1103064/.minikube/machines/embed-certs-401547/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1210 00:38:13.324410 1327579 cli_runner.go:164] Run: docker container inspect embed-certs-401547 --format={{.State.Status}}
I1210 00:38:13.347236 1327579 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1210 00:38:13.347260 1327579 kic_runner.go:114] Args: [docker exec --privileged embed-certs-401547 chown docker:docker /home/docker/.ssh/authorized_keys]
I1210 00:38:13.414938 1327579 cli_runner.go:164] Run: docker container inspect embed-certs-401547 --format={{.State.Status}}
I1210 00:38:13.436594 1327579 machine.go:93] provisionDockerMachine start ...
I1210 00:38:13.436703 1327579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-401547
I1210 00:38:13.458619 1327579 main.go:141] libmachine: Using SSH client type: native
I1210 00:38:13.458904 1327579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 34533 <nil> <nil>}
I1210 00:38:13.458921 1327579 main.go:141] libmachine: About to run SSH command:
hostname
I1210 00:38:13.459559 1327579 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1210 00:38:16.592819 1327579 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-401547
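
The dial error two lines up is typically transient: sshd inside the freshly started container is not yet accepting connections, and the client retries until the handshake succeeds, as it does here three seconds later. A hedged sketch of such a retry loop with golang.org/x/crypto/ssh, reusing the port and key path from this log (the retry policy itself is an assumption):

    // Sketch: retry an SSH dial against a just-started kic container.
    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/embed-certs-401547/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local container
            Timeout:         10 * time.Second,
        }
        for attempt := 1; ; attempt++ {
            client, err := ssh.Dial("tcp", "127.0.0.1:34533", cfg) // forwarded 22/tcp port from the log
            if err == nil {
                defer client.Close()
                fmt.Println("SSH up after", attempt, "attempt(s)")
                return
            }
            if attempt >= 10 {
                panic(err) // e.g. "handshake failed: EOF" while sshd starts
            }
            time.Sleep(2 * time.Second)
        }
    }
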
I1210 00:38:16.592847 1327579 ubuntu.go:169] provisioning hostname "embed-certs-401547"
I1210 00:38:16.592923 1327579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-401547
I1210 00:38:16.614085 1327579 main.go:141] libmachine: Using SSH client type: native
I1210 00:38:16.614333 1327579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 34533 <nil> <nil>}
I1210 00:38:16.614351 1327579 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-401547 && echo "embed-certs-401547" | sudo tee /etc/hostname
I1210 00:38:16.749818 1327579 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-401547
I1210 00:38:16.749913 1327579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-401547
I1210 00:38:16.767792 1327579 main.go:141] libmachine: Using SSH client type: native
I1210 00:38:16.768033 1327579 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 34533 <nil> <nil>}
I1210 00:38:16.768056 1327579 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-401547' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-401547/g' /etc/hosts;
	else
		echo '127.0.1.1 embed-certs-401547' | sudo tee -a /etc/hosts;
	fi
fi
I1210 00:38:16.889835 1327579 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1210 00:38:16.889860 1327579 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20062-1103064/.minikube CaCertPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20062-1103064/.minikube}
I1210 00:38:16.889879 1327579 ubuntu.go:177] setting up certificates
I1210 00:38:16.889889 1327579 provision.go:84] configureAuth start
I1210 00:38:16.889951 1327579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-401547
I1210 00:38:16.908189 1327579 provision.go:143] copyHostCerts
I1210 00:38:16.908255 1327579 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.pem, removing ...
I1210 00:38:16.908273 1327579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.pem
I1210 00:38:16.908347 1327579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.pem (1078 bytes)
I1210 00:38:16.908433 1327579 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-1103064/.minikube/cert.pem, removing ...
I1210 00:38:16.908437 1327579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-1103064/.minikube/cert.pem
I1210 00:38:16.908463 1327579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20062-1103064/.minikube/cert.pem (1123 bytes)
I1210 00:38:16.908512 1327579 exec_runner.go:144] found /home/jenkins/minikube-integration/20062-1103064/.minikube/key.pem, removing ...
I1210 00:38:16.908516 1327579 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20062-1103064/.minikube/key.pem
I1210 00:38:16.908538 1327579 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20062-1103064/.minikube/key.pem (1679 bytes)
I1210 00:38:16.908599 1327579 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca-key.pem org=jenkins.embed-certs-401547 san=[127.0.0.1 192.168.76.2 embed-certs-401547 localhost minikube]
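The server cert generated above carries both IP and DNS SANs so the machine answers TLS on any address minikube may dial. A hedged crypto/x509 sketch of an equivalent issuance; the throwaway in-memory CA and validity window are illustrative, whereas minikube signs with the persisted ca.pem/ca-key.pem:

// Sketch of the "generating server cert ... san=[...]" step above: sign a
// server certificate with a CA, embedding the same IP and DNS SANs.
package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "fmt"
    "math/big"
    "net"
    "time"
)

func main() {
    // Illustrative CA generated on the fly; minikube loads ca.pem/ca-key.pem.
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-401547"}},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(24 * time.Hour),
        // SANs copied from the log line above.
        IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
        DNSNames:    []string{"embed-certs-401547", "localhost", "minikube"},
        ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    }
    der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    fmt.Printf("issued server cert: %d DER bytes\n", len(der))
}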
I1210 00:38:17.178097 1327579 provision.go:177] copyRemoteCerts
I1210 00:38:17.178213 1327579 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1210 00:38:17.178274 1327579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-401547
I1210 00:38:17.195650 1327579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/embed-certs-401547/id_rsa Username:docker}
I1210 00:38:17.286110 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I1210 00:38:17.312855 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1210 00:38:17.339087 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1210 00:38:17.364679 1327579 provision.go:87] duration metric: took 474.775962ms to configureAuth
I1210 00:38:17.364703 1327579 ubuntu.go:193] setting minikube options for container-runtime
I1210 00:38:17.364930 1327579 config.go:182] Loaded profile config "embed-certs-401547": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1210 00:38:17.364943 1327579 machine.go:96] duration metric: took 3.928326705s to provisionDockerMachine
I1210 00:38:17.364951 1327579 client.go:171] duration metric: took 11.13026361s to LocalClient.Create
I1210 00:38:17.364971 1327579 start.go:167] duration metric: took 11.130326952s to libmachine.API.Create "embed-certs-401547"
I1210 00:38:17.364981 1327579 start.go:293] postStartSetup for "embed-certs-401547" (driver="docker")
I1210 00:38:17.364990 1327579 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1210 00:38:17.365050 1327579 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1210 00:38:17.365107 1327579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-401547
I1210 00:38:17.382972 1327579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/embed-certs-401547/id_rsa Username:docker}
I1210 00:38:17.474697 1327579 ssh_runner.go:195] Run: cat /etc/os-release
I1210 00:38:17.478141 1327579 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1210 00:38:17.478176 1327579 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1210 00:38:17.478187 1327579 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1210 00:38:17.478195 1327579 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1210 00:38:17.478206 1327579 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-1103064/.minikube/addons for local assets ...
I1210 00:38:17.478276 1327579 filesync.go:126] Scanning /home/jenkins/minikube-integration/20062-1103064/.minikube/files for local assets ...
I1210 00:38:17.478364 1327579 filesync.go:149] local asset: /home/jenkins/minikube-integration/20062-1103064/.minikube/files/etc/ssl/certs/11084502.pem -> 11084502.pem in /etc/ssl/certs
I1210 00:38:17.478483 1327579 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1210 00:38:17.487191 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/files/etc/ssl/certs/11084502.pem --> /etc/ssl/certs/11084502.pem (1708 bytes)
I1210 00:38:17.522917 1327579 start.go:296] duration metric: took 157.921872ms for postStartSetup
I1210 00:38:17.523342 1327579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-401547
I1210 00:38:17.539837 1327579 profile.go:143] Saving config to /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/config.json ...
I1210 00:38:17.540124 1327579 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1210 00:38:17.540174 1327579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-401547
I1210 00:38:17.556479 1327579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/embed-certs-401547/id_rsa Username:docker}
I1210 00:38:17.646238 1327579 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1210 00:38:17.650683 1327579 start.go:128] duration metric: took 11.419604201s to createHost
I1210 00:38:17.650710 1327579 start.go:83] releasing machines lock for "embed-certs-401547", held for 11.419751085s
I1210 00:38:17.650785 1327579 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-401547
I1210 00:38:17.667615 1327579 ssh_runner.go:195] Run: cat /version.json
I1210 00:38:17.667669 1327579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-401547
I1210 00:38:17.667723 1327579 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1210 00:38:17.667803 1327579 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-401547
I1210 00:38:17.685722 1327579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/embed-certs-401547/id_rsa Username:docker}
I1210 00:38:17.689352 1327579 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34533 SSHKeyPath:/home/jenkins/minikube-integration/20062-1103064/.minikube/machines/embed-certs-401547/id_rsa Username:docker}
I1210 00:38:17.780918 1327579 ssh_runner.go:195] Run: systemctl --version
I1210 00:38:17.916748 1327579 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1210 00:38:17.921184 1327579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1210 00:38:17.948504 1327579 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1210 00:38:17.948594 1327579 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1210 00:38:17.980680 1327579 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
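The two find/sed passes above first normalize any loopback CNI conf (inject a "name" field if missing, pin cniVersion to 1.0.0), then move bridge/podman confs aside as *.mk_disabled so they stop competing with the chosen CNI. A sketch of the loopback patch as a JSON rewrite, assuming the minimal conf shape implied by the sed expression:

// Rough equivalent of the loopback patch above: make sure the conf carries a
// "name" and force cniVersion to "1.0.0". The real patch is sed over SSH.
package main

import (
    "encoding/json"
    "fmt"
)

func patchLoopback(conf []byte) ([]byte, error) {
    var m map[string]any
    if err := json.Unmarshal(conf, &m); err != nil {
        return nil, err
    }
    if _, ok := m["name"]; !ok {
        m["name"] = "loopback"
    }
    m["cniVersion"] = "1.0.0"
    return json.MarshalIndent(m, "", "  ")
}

func main() {
    out, err := patchLoopback([]byte(`{"cniVersion": "0.3.1", "type": "loopback"}`))
    if err != nil {
        panic(err)
    }
    fmt.Println(string(out))
}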
I1210 00:38:17.980704 1327579 start.go:495] detecting cgroup driver to use...
I1210 00:38:17.980748 1327579 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1210 00:38:17.980801 1327579 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1210 00:38:17.994076 1327579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1210 00:38:18.014666 1327579 docker.go:217] disabling cri-docker service (if available) ...
I1210 00:38:18.014756 1327579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1210 00:38:18.034693 1327579 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1210 00:38:18.050961 1327579 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1210 00:38:18.147819 1327579 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1210 00:38:18.241415 1327579 docker.go:233] disabling docker service ...
I1210 00:38:18.241521 1327579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1210 00:38:18.266790 1327579 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1210 00:38:18.279604 1327579 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1210 00:38:18.367286 1327579 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1210 00:38:18.453019 1327579 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1210 00:38:18.465590 1327579 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1210 00:38:18.485801 1327579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I1210 00:38:18.496952 1327579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1210 00:38:18.513944 1327579 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1210 00:38:18.514011 1327579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1210 00:38:18.524010 1327579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1210 00:38:18.535032 1327579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1210 00:38:18.545069 1327579 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1210 00:38:18.554877 1327579 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1210 00:38:18.564080 1327579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1210 00:38:18.574154 1327579 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1210 00:38:18.584277 1327579 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1210 00:38:18.595896 1327579 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1210 00:38:18.604640 1327579 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1210 00:38:18.613647 1327579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1210 00:38:18.698963 1327579 ssh_runner.go:195] Run: sudo systemctl restart containerd
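Each sed above rewrites one knob in /etc/containerd/config.toml (pause image, runc v2 shim, CNI conf_dir, unprivileged ports) before the daemon-reload and restart. A Go rendering of the SystemdCgroup edit, the one that actually pins the detected "cgroupfs" driver; the embedded fragment is a trimmed containerd-1.7-style sample, not the config from this node:

// One of the sed edits above, as a regexp rewrite: force SystemdCgroup=false
// so containerd's runc shim matches the "cgroupfs" driver on the host.
package main

import (
    "fmt"
    "regexp"
)

func main() {
    conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
    re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}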
I1210 00:38:18.840574 1327579 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1210 00:38:18.840690 1327579 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1210 00:38:18.844635 1327579 start.go:563] Will wait 60s for crictl version
I1210 00:38:18.844751 1327579 ssh_runner.go:195] Run: which crictl
I1210 00:38:18.848442 1327579 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1210 00:38:18.899144 1327579 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1210 00:38:18.899261 1327579 ssh_runner.go:195] Run: containerd --version
I1210 00:38:18.922215 1327579 ssh_runner.go:195] Run: containerd --version
I1210 00:38:18.949966 1327579 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.22 ...
I1210 00:38:18.953005 1327579 cli_runner.go:164] Run: docker network inspect embed-certs-401547 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1210 00:38:18.969909 1327579 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1210 00:38:18.973708 1327579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
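The one-liner above refreshes a single /etc/hosts entry atomically: strip any existing host.minikube.internal line, append the current mapping, write to a temp file, then copy that back over /etc/hosts. A rough in-memory equivalent in Go (approximate: the shell grep only matches a tab before the name):

// setHostEntry mirrors the bash one-liner above, minus the sudo cp step.
package main

import (
    "fmt"
    "strings"
)

func setHostEntry(hosts, ip, name string) string {
    var keep []string
    for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
        if !strings.HasSuffix(line, "\t"+name) {
            keep = append(keep, line)
        }
    }
    keep = append(keep, ip+"\t"+name)
    return strings.Join(keep, "\n") + "\n"
}

func main() {
    fmt.Print(setHostEntry("127.0.0.1 localhost\n", "192.168.76.1", "host.minikube.internal"))
}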
I1210 00:38:18.984985 1327579 kubeadm.go:883] updating cluster {Name:embed-certs-401547 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-401547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1210 00:38:18.985109 1327579 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1210 00:38:18.985165 1327579 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 00:38:19.029911 1327579 containerd.go:627] all images are preloaded for containerd runtime.
I1210 00:38:19.029935 1327579 containerd.go:534] Images already preloaded, skipping extraction
I1210 00:38:19.029999 1327579 ssh_runner.go:195] Run: sudo crictl images --output json
I1210 00:38:19.068859 1327579 containerd.go:627] all images are preloaded for containerd runtime.
I1210 00:38:19.068882 1327579 cache_images.go:84] Images are preloaded, skipping loading
I1210 00:38:19.068891 1327579 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.2 containerd true true} ...
I1210 00:38:19.069028 1327579 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-401547 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.31.2 ClusterName:embed-certs-401547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1210 00:38:19.069110 1327579 ssh_runner.go:195] Run: sudo crictl info
I1210 00:38:19.106071 1327579 cni.go:84] Creating CNI manager for ""
I1210 00:38:19.106094 1327579 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1210 00:38:19.106103 1327579 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1210 00:38:19.106130 1327579 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-401547 NodeName:embed-certs-401547 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1210 00:38:19.106248 1327579 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-401547"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.31.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1210 00:38:19.106317 1327579 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
I1210 00:38:19.115294 1327579 binaries.go:44] Found k8s binaries, skipping transfer
I1210 00:38:19.115384 1327579 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1210 00:38:19.124384 1327579 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I1210 00:38:19.143212 1327579 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1210 00:38:19.162263 1327579 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I1210 00:38:19.196782 1327579 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1210 00:38:19.200267 1327579 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1210 00:38:19.211257 1327579 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1210 00:38:19.299204 1327579 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1210 00:38:19.315189 1327579 certs.go:68] Setting up /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547 for IP: 192.168.76.2
I1210 00:38:19.315209 1327579 certs.go:194] generating shared ca certs ...
I1210 00:38:19.315225 1327579 certs.go:226] acquiring lock for ca certs: {Name:mkd7f0f0a5f922d78bc3f70822a394d56641c333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 00:38:19.315372 1327579 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.key
I1210 00:38:19.315418 1327579 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/proxy-client-ca.key
I1210 00:38:19.315425 1327579 certs.go:256] generating profile certs ...
I1210 00:38:19.315479 1327579 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/client.key
I1210 00:38:19.315491 1327579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/client.crt with IP's: []
I1210 00:38:20.319187 1327579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/client.crt ...
I1210 00:38:20.319218 1327579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/client.crt: {Name:mk36c3fec5eaf57f999e70a7019a84f86379a11f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 00:38:20.319425 1327579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/client.key ...
I1210 00:38:20.319441 1327579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/client.key: {Name:mkd16635cec8aec1560df6fbd5cfd84898e97e97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 00:38:20.320273 1327579 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.key.73413064
I1210 00:38:20.320297 1327579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.crt.73413064 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I1210 00:38:20.927423 1327579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.crt.73413064 ...
I1210 00:38:20.927454 1327579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.crt.73413064: {Name:mkf505cf0ae8fff1775cb135c00956545631bdd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 00:38:20.927748 1327579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.key.73413064 ...
I1210 00:38:20.927769 1327579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.key.73413064: {Name:mk7ce56de1ccc869314fe703789cf92b6084970f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 00:38:20.928758 1327579 certs.go:381] copying /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.crt.73413064 -> /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.crt
I1210 00:38:20.928903 1327579 certs.go:385] copying /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.key.73413064 -> /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.key
I1210 00:38:20.929004 1327579 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/proxy-client.key
I1210 00:38:20.929055 1327579 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/proxy-client.crt with IP's: []
I1210 00:38:19.946739 1317926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1210 00:38:19.959371 1317926 api_server.go:72] duration metric: took 5m56.221058206s to wait for apiserver process to appear ...
I1210 00:38:19.959394 1317926 api_server.go:88] waiting for apiserver healthz status ...
I1210 00:38:19.959429 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1210 00:38:19.959490 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1210 00:38:20.010592 1317926 cri.go:89] found id: "d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d"
I1210 00:38:20.010617 1317926 cri.go:89] found id: "47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85"
I1210 00:38:20.010622 1317926 cri.go:89] found id: ""
I1210 00:38:20.010630 1317926 logs.go:282] 2 containers: [d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d 47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85]
I1210 00:38:20.010698 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.016138 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.020971 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1210 00:38:20.021101 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1210 00:38:20.081403 1317926 cri.go:89] found id: "4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4"
I1210 00:38:20.081424 1317926 cri.go:89] found id: "f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76"
I1210 00:38:20.081429 1317926 cri.go:89] found id: ""
I1210 00:38:20.081436 1317926 logs.go:282] 2 containers: [4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4 f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76]
I1210 00:38:20.081501 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.086290 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.091233 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1210 00:38:20.091381 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1210 00:38:20.149418 1317926 cri.go:89] found id: "8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051"
I1210 00:38:20.149499 1317926 cri.go:89] found id: "1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154"
I1210 00:38:20.149518 1317926 cri.go:89] found id: ""
I1210 00:38:20.149548 1317926 logs.go:282] 2 containers: [8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051 1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154]
I1210 00:38:20.149657 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.158679 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.167569 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1210 00:38:20.167748 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1210 00:38:20.239974 1317926 cri.go:89] found id: "c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077"
I1210 00:38:20.240036 1317926 cri.go:89] found id: "47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869"
I1210 00:38:20.240063 1317926 cri.go:89] found id: ""
I1210 00:38:20.240086 1317926 logs.go:282] 2 containers: [c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077 47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869]
I1210 00:38:20.240172 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.244686 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.248645 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1210 00:38:20.248717 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1210 00:38:20.305795 1317926 cri.go:89] found id: "a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e"
I1210 00:38:20.305814 1317926 cri.go:89] found id: "d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717"
I1210 00:38:20.305819 1317926 cri.go:89] found id: ""
I1210 00:38:20.305826 1317926 logs.go:282] 2 containers: [a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717]
I1210 00:38:20.305885 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.309984 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.314071 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1210 00:38:20.314146 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1210 00:38:20.364110 1317926 cri.go:89] found id: "dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0"
I1210 00:38:20.364130 1317926 cri.go:89] found id: "6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21"
I1210 00:38:20.364146 1317926 cri.go:89] found id: ""
I1210 00:38:20.364153 1317926 logs.go:282] 2 containers: [dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0 6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21]
I1210 00:38:20.364210 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.368655 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.373007 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1210 00:38:20.373132 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1210 00:38:20.421181 1317926 cri.go:89] found id: "eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469"
I1210 00:38:20.421255 1317926 cri.go:89] found id: "bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e"
I1210 00:38:20.421288 1317926 cri.go:89] found id: ""
I1210 00:38:20.421316 1317926 logs.go:282] 2 containers: [eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469 bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e]
I1210 00:38:20.421412 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.425659 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.429627 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1210 00:38:20.429781 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1210 00:38:20.478682 1317926 cri.go:89] found id: "07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6"
I1210 00:38:20.478742 1317926 cri.go:89] found id: ""
I1210 00:38:20.478771 1317926 logs.go:282] 1 containers: [07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6]
I1210 00:38:20.478860 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.483107 1317926 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1210 00:38:20.483228 1317926 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1210 00:38:20.548056 1317926 cri.go:89] found id: "c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488"
I1210 00:38:20.548129 1317926 cri.go:89] found id: "7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41"
I1210 00:38:20.548149 1317926 cri.go:89] found id: ""
I1210 00:38:20.548173 1317926 logs.go:282] 2 containers: [c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488 7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41]
I1210 00:38:20.548261 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.552522 1317926 ssh_runner.go:195] Run: which crictl
I1210 00:38:20.556568 1317926 logs.go:123] Gathering logs for kube-scheduler [47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869] ...
I1210 00:38:20.556645 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869"
I1210 00:38:20.633390 1317926 logs.go:123] Gathering logs for kube-proxy [a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e] ...
I1210 00:38:20.633461 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e"
I1210 00:38:20.688605 1317926 logs.go:123] Gathering logs for kindnet [eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469] ...
I1210 00:38:20.688633 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469"
I1210 00:38:20.743156 1317926 logs.go:123] Gathering logs for kindnet [bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e] ...
I1210 00:38:20.743229 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e"
I1210 00:38:20.792834 1317926 logs.go:123] Gathering logs for kubernetes-dashboard [07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6] ...
I1210 00:38:20.792860 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6"
I1210 00:38:20.839415 1317926 logs.go:123] Gathering logs for storage-provisioner [7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41] ...
I1210 00:38:20.839448 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41"
I1210 00:38:20.886158 1317926 logs.go:123] Gathering logs for coredns [8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051] ...
I1210 00:38:20.886182 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051"
I1210 00:38:20.932316 1317926 logs.go:123] Gathering logs for kube-scheduler [c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077] ...
I1210 00:38:20.932339 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077"
I1210 00:38:20.979376 1317926 logs.go:123] Gathering logs for containerd ...
I1210 00:38:20.979401 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1210 00:38:21.052205 1317926 logs.go:123] Gathering logs for etcd [4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4] ...
I1210 00:38:21.052243 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4"
I1210 00:38:21.169258 1317926 logs.go:123] Gathering logs for describe nodes ...
I1210 00:38:21.169454 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1210 00:38:21.369081 1317926 logs.go:123] Gathering logs for kube-apiserver [47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85] ...
I1210 00:38:21.369170 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85"
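The pattern repeated throughout this log-gathering phase: resolve a component to container IDs with `crictl ps -a --quiet --name=...`, then tail each container's logs. A compact Go sketch of those two calls; the component name and the 400-line tail mirror the log lines above, nothing else here is minikube's code:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // List all (including exited) kube-apiserver containers, IDs only.
    ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
    if err != nil {
        fmt.Println(err)
        return
    }
    // Tail the last 400 log lines of each matching container.
    for _, id := range strings.Fields(string(ids)) {
        out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
        fmt.Printf("== %s ==\n%s", id, out)
    }
}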
I1210 00:38:21.232572 1327579 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/proxy-client.crt ...
I1210 00:38:21.232615 1327579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/proxy-client.crt: {Name:mk2fe463a076bd1fe940947188f5289bf38ee25c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 00:38:21.233570 1327579 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/proxy-client.key ...
I1210 00:38:21.233596 1327579 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/proxy-client.key: {Name:mk728410d752aa2fd1d8cae701df38a1aa406281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1210 00:38:21.234477 1327579 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/1108450.pem (1338 bytes)
W1210 00:38:21.234561 1327579 certs.go:480] ignoring /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/1108450_empty.pem, impossibly tiny 0 bytes
I1210 00:38:21.234578 1327579 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca-key.pem (1675 bytes)
I1210 00:38:21.234629 1327579 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/ca.pem (1078 bytes)
I1210 00:38:21.234693 1327579 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/cert.pem (1123 bytes)
I1210 00:38:21.234747 1327579 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/key.pem (1679 bytes)
I1210 00:38:21.234844 1327579 certs.go:484] found cert: /home/jenkins/minikube-integration/20062-1103064/.minikube/files/etc/ssl/certs/11084502.pem (1708 bytes)
I1210 00:38:21.235691 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1210 00:38:21.269222 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1210 00:38:21.305818 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1210 00:38:21.339776 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1210 00:38:21.378321 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I1210 00:38:21.406772 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1210 00:38:21.438719 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1210 00:38:21.472753 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/profiles/embed-certs-401547/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1210 00:38:21.515205 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/files/etc/ssl/certs/11084502.pem --> /usr/share/ca-certificates/11084502.pem (1708 bytes)
I1210 00:38:21.552733 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1210 00:38:21.581262 1327579 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20062-1103064/.minikube/certs/1108450.pem --> /usr/share/ca-certificates/1108450.pem (1338 bytes)
I1210 00:38:21.615036 1327579 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1210 00:38:21.639576 1327579 ssh_runner.go:195] Run: openssl version
I1210 00:38:21.650469 1327579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11084502.pem && ln -fs /usr/share/ca-certificates/11084502.pem /etc/ssl/certs/11084502.pem"
I1210 00:38:21.672882 1327579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11084502.pem
I1210 00:38:21.678052 1327579 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 9 23:52 /usr/share/ca-certificates/11084502.pem
I1210 00:38:21.678164 1327579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11084502.pem
I1210 00:38:21.686509 1327579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11084502.pem /etc/ssl/certs/3ec20f2e.0"
I1210 00:38:21.698686 1327579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1210 00:38:21.709093 1327579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1210 00:38:21.713391 1327579 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 9 23:44 /usr/share/ca-certificates/minikubeCA.pem
I1210 00:38:21.713504 1327579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1210 00:38:21.725523 1327579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1210 00:38:21.740354 1327579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1108450.pem && ln -fs /usr/share/ca-certificates/1108450.pem /etc/ssl/certs/1108450.pem"
I1210 00:38:21.753840 1327579 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1108450.pem
I1210 00:38:21.758542 1327579 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 9 23:52 /usr/share/ca-certificates/1108450.pem
I1210 00:38:21.758654 1327579 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1108450.pem
I1210 00:38:21.766751 1327579 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1108450.pem /etc/ssl/certs/51391683.0"
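The ln -fs targets above are OpenSSL subject-hash names: `openssl x509 -hash -noout` prints the hash under which OpenSSL looks a CA up in /etc/ssl/certs, and <hash>.0 is the first certificate with that hash. A sketch of the same link step, shelling out to openssl (needs openssl on PATH and root to write /etc/ssl/certs; the path is this run's):

// linkCA computes the OpenSSL subject hash of a CA PEM and symlinks it into
// /etc/ssl/certs as <hash>.0, mirroring the openssl + ln -fs pair above.
package main

import (
    "fmt"
    "os"
    "os/exec"
    "path/filepath"
    "strings"
)

func linkCA(pem string) error {
    out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    if err != nil {
        return err
    }
    link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    os.Remove(link) // replace a stale link, like the ln -fs above
    return os.Symlink(pem, link)
}

func main() {
    if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}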
I1210 00:38:21.780258 1327579 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1210 00:38:21.784850 1327579 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1210 00:38:21.784950 1327579 kubeadm.go:392] StartCluster: {Name:embed-certs-401547 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-401547 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1210 00:38:21.785071 1327579 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1210 00:38:21.785149 1327579 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1210 00:38:21.867032 1327579 cri.go:89] found id: ""
I1210 00:38:21.867143 1327579 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1210 00:38:21.892752 1327579 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1210 00:38:21.912760 1327579 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1210 00:38:21.912900 1327579 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1210 00:38:21.935775 1327579 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1210 00:38:21.935835 1327579 kubeadm.go:157] found existing configuration files:
I1210 00:38:21.935903 1327579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1210 00:38:21.946682 1327579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1210 00:38:21.946793 1327579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1210 00:38:21.958333 1327579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1210 00:38:21.974976 1327579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1210 00:38:21.975100 1327579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1210 00:38:21.989861 1327579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1210 00:38:22.001256 1327579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1210 00:38:22.001373 1327579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1210 00:38:22.024467 1327579 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1210 00:38:22.036045 1327579 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1210 00:38:22.036138 1327579 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
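The four grep/rm pairs above implement one rule: any kubeconfig under /etc/kubernetes that does not pin https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it. In this run every grep failed only because the files did not exist yet (first start). A compact sketch of that loop, with the file list taken from the log (needs root to touch /etc/kubernetes for real):

package main

import (
    "fmt"
    "os"
    "strings"
)

func main() {
    const want = "https://control-plane.minikube.internal:8443"
    for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
        path := "/etc/kubernetes/" + f
        b, err := os.ReadFile(path)
        if err != nil || !strings.Contains(string(b), want) {
            fmt.Println("removing stale", path)
            os.Remove(path) // no-op when the file never existed, as in this run
        }
    }
}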
I1210 00:38:22.046070 1327579 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1210 00:38:22.114435 1327579 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
I1210 00:38:22.115024 1327579 kubeadm.go:310] [preflight] Running pre-flight checks
I1210 00:38:22.157899 1327579 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I1210 00:38:22.157969 1327579 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1072-aws
I1210 00:38:22.158005 1327579 kubeadm.go:310] OS: Linux
I1210 00:38:22.158055 1327579 kubeadm.go:310] CGROUPS_CPU: enabled
I1210 00:38:22.158104 1327579 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I1210 00:38:22.158151 1327579 kubeadm.go:310] CGROUPS_CPUSET: enabled
I1210 00:38:22.158199 1327579 kubeadm.go:310] CGROUPS_DEVICES: enabled
I1210 00:38:22.158247 1327579 kubeadm.go:310] CGROUPS_FREEZER: enabled
I1210 00:38:22.158300 1327579 kubeadm.go:310] CGROUPS_MEMORY: enabled
I1210 00:38:22.158345 1327579 kubeadm.go:310] CGROUPS_PIDS: enabled
I1210 00:38:22.158392 1327579 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I1210 00:38:22.158438 1327579 kubeadm.go:310] CGROUPS_BLKIO: enabled
I1210 00:38:22.231584 1327579 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I1210 00:38:22.231696 1327579 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1210 00:38:22.231788 1327579 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1210 00:38:22.237902 1327579 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1210 00:38:22.243006 1327579 out.go:235] - Generating certificates and keys ...
I1210 00:38:22.243111 1327579 kubeadm.go:310] [certs] Using existing ca certificate authority
I1210 00:38:22.243199 1327579 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I1210 00:38:22.793784 1327579 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I1210 00:38:23.479229 1327579 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I1210 00:38:24.195330 1327579 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I1210 00:38:24.544586 1327579 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I1210 00:38:24.935853 1327579 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I1210 00:38:24.936186 1327579 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-401547 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1210 00:38:25.705529 1327579 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I1210 00:38:25.705902 1327579 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-401547 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
I1210 00:38:21.463157 1317926 logs.go:123] Gathering logs for etcd [f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76] ...
I1210 00:38:21.463230 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76"
I1210 00:38:21.538927 1317926 logs.go:123] Gathering logs for kube-proxy [d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717] ...
I1210 00:38:21.538957 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717"
I1210 00:38:21.592235 1317926 logs.go:123] Gathering logs for kube-controller-manager [dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0] ...
I1210 00:38:21.592266 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0"
I1210 00:38:21.675032 1317926 logs.go:123] Gathering logs for kube-controller-manager [6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21] ...
I1210 00:38:21.675072 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21"
I1210 00:38:21.753859 1317926 logs.go:123] Gathering logs for storage-provisioner [c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488] ...
I1210 00:38:21.753901 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488"
I1210 00:38:21.804424 1317926 logs.go:123] Gathering logs for kubelet ...
I1210 00:38:21.804453 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
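Each gathering step is a plain command executed over SSH inside the node, so any of them can be replayed by hand when triaging; for instance, the kubelet step above is roughly equivalent to:

  minikube ssh -p old-k8s-version-452467 -- sudo journalctl -u kubelet -n 400
  # or, for one container's logs by ID (substitute a real ID from crictl ps):
  minikube ssh -p old-k8s-version-452467 -- sudo crictl logs --tail 400 <container-id>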
W1210 00:38:21.881620 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.134990 660 reflector.go:138] object-"kube-system"/"kindnet-token-jppl8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jppl8" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.881878 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135109 660 reflector.go:138] object-"default"/"default-token-cjfvv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-cjfvv" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.882149 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135173 660 reflector.go:138] object-"kube-system"/"kube-proxy-token-kh82p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-kh82p" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.882424 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135229 660 reflector.go:138] object-"kube-system"/"storage-provisioner-token-t88lj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-t88lj" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.882731 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135383 660 reflector.go:138] object-"kube-system"/"metrics-server-token-6t6sh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-6t6sh" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.882972 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135444 660 reflector.go:138] object-"kube-system"/"coredns-token-ltmzn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-ltmzn" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.883199 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135505 660 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.883437 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:40 old-k8s-version-452467 kubelet[660]: E1210 00:32:40.135561 660 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-452467" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-452467' and this object
W1210 00:38:21.896238 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:44 old-k8s-version-452467 kubelet[660]: E1210 00:32:44.318795 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:21.896482 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:44 old-k8s-version-452467 kubelet[660]: E1210 00:32:44.966265 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.900533 1317926 logs.go:138] Found kubelet problem: Dec 10 00:32:59 old-k8s-version-452467 kubelet[660]: E1210 00:32:59.771822 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:21.901421 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:13 old-k8s-version-452467 kubelet[660]: E1210 00:33:13.748779 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.901869 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:14 old-k8s-version-452467 kubelet[660]: E1210 00:33:14.246302 660 pod_workers.go:191] Error syncing pod 7cbccf04-1bab-4d0f-b6b7-06642841c9ad ("storage-provisioner_kube-system(7cbccf04-1bab-4d0f-b6b7-06642841c9ad)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(7cbccf04-1bab-4d0f-b6b7-06642841c9ad)"
W1210 00:38:21.902467 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:15 old-k8s-version-452467 kubelet[660]: E1210 00:33:15.274011 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.902797 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:16 old-k8s-version-452467 kubelet[660]: E1210 00:33:16.278186 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.903467 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:22 old-k8s-version-452467 kubelet[660]: E1210 00:33:22.652549 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.906013 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:25 old-k8s-version-452467 kubelet[660]: E1210 00:33:25.747166 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:21.906783 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:36 old-k8s-version-452467 kubelet[660]: E1210 00:33:36.343317 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.906996 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:37 old-k8s-version-452467 kubelet[660]: E1210 00:33:37.723434 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.907348 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:42 old-k8s-version-452467 kubelet[660]: E1210 00:33:42.652410 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.907556 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:49 old-k8s-version-452467 kubelet[660]: E1210 00:33:49.723131 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.907908 1317926 logs.go:138] Found kubelet problem: Dec 10 00:33:53 old-k8s-version-452467 kubelet[660]: E1210 00:33:53.722786 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.908318 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:02 old-k8s-version-452467 kubelet[660]: E1210 00:34:02.722788 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.908973 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:08 old-k8s-version-452467 kubelet[660]: E1210 00:34:08.441533 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.909381 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:12 old-k8s-version-452467 kubelet[660]: E1210 00:34:12.653097 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.911917 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:13 old-k8s-version-452467 kubelet[660]: E1210 00:34:13.735304 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:21.912302 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:25 old-k8s-version-452467 kubelet[660]: E1210 00:34:25.726049 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.912544 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:26 old-k8s-version-452467 kubelet[660]: E1210 00:34:26.722773 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.912777 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:37 old-k8s-version-452467 kubelet[660]: E1210 00:34:37.726330 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.913222 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:37 old-k8s-version-452467 kubelet[660]: E1210 00:34:37.732566 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.913483 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:48 old-k8s-version-452467 kubelet[660]: E1210 00:34:48.723555 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.914779 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:52 old-k8s-version-452467 kubelet[660]: E1210 00:34:52.558062 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.915148 1317926 logs.go:138] Found kubelet problem: Dec 10 00:34:53 old-k8s-version-452467 kubelet[660]: E1210 00:34:53.558808 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.915365 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:03 old-k8s-version-452467 kubelet[660]: E1210 00:35:03.722557 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.915718 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:06 old-k8s-version-452467 kubelet[660]: E1210 00:35:06.722239 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.915929 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:18 old-k8s-version-452467 kubelet[660]: E1210 00:35:18.722467 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.916281 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:19 old-k8s-version-452467 kubelet[660]: E1210 00:35:19.722179 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.916489 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:31 old-k8s-version-452467 kubelet[660]: E1210 00:35:31.727727 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.916852 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:32 old-k8s-version-452467 kubelet[660]: E1210 00:35:32.722255 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.923582 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:42 old-k8s-version-452467 kubelet[660]: E1210 00:35:42.732073 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1210 00:38:21.923968 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:43 old-k8s-version-452467 kubelet[660]: E1210 00:35:43.722214 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.924180 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:53 old-k8s-version-452467 kubelet[660]: E1210 00:35:53.730978 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.924537 1317926 logs.go:138] Found kubelet problem: Dec 10 00:35:56 old-k8s-version-452467 kubelet[660]: E1210 00:35:56.722173 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.924755 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:07 old-k8s-version-452467 kubelet[660]: E1210 00:36:07.722744 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.925160 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:10 old-k8s-version-452467 kubelet[660]: E1210 00:36:10.722185 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.925383 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:21 old-k8s-version-452467 kubelet[660]: E1210 00:36:21.723017 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.926007 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:24 old-k8s-version-452467 kubelet[660]: E1210 00:36:24.846395 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.926368 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:32 old-k8s-version-452467 kubelet[660]: E1210 00:36:32.652988 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.926576 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:34 old-k8s-version-452467 kubelet[660]: E1210 00:36:34.722539 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.926931 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:45 old-k8s-version-452467 kubelet[660]: E1210 00:36:45.725415 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.927145 1317926 logs.go:138] Found kubelet problem: Dec 10 00:36:49 old-k8s-version-452467 kubelet[660]: E1210 00:36:49.722539 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.927485 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:00 old-k8s-version-452467 kubelet[660]: E1210 00:37:00.722584 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.927716 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:00 old-k8s-version-452467 kubelet[660]: E1210 00:37:00.722789 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.927928 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:11 old-k8s-version-452467 kubelet[660]: E1210 00:37:11.722844 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.928279 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:15 old-k8s-version-452467 kubelet[660]: E1210 00:37:15.725496 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.929039 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:25 old-k8s-version-452467 kubelet[660]: E1210 00:37:25.722996 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.929421 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:29 old-k8s-version-452467 kubelet[660]: E1210 00:37:29.722673 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.929773 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722443 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.929980 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722952 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.930207 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.722722 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.930592 1317926 logs.go:138] Found kubelet problem: Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.723929 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.930992 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:05 old-k8s-version-452467 kubelet[660]: E1210 00:38:05.722301 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.931218 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:06 old-k8s-version-452467 kubelet[660]: E1210 00:38:06.723507 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:21.931635 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:16 old-k8s-version-452467 kubelet[660]: E1210 00:38:16.722163 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:21.931868 1317926 logs.go:138] Found kubelet problem: Dec 10 00:38:18 old-k8s-version-452467 kubelet[660]: E1210 00:38:18.722562 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
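Two patterns run through the problems above. The dashboard-metrics-scraper entries follow the kubelet's CrashLoopBackOff schedule, which doubles from 10s through 20s, 40s, 1m20s and 2m40s before capping at 5m. The metrics-server entries appear to be the intended outcome of this test wiring the image to the unresolvable fake.domain registry; the same failure reproduces directly on the node:

  sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
  # fails with: dial tcp: lookup fake.domain ... no such host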
I1210 00:38:21.931893 1317926 logs.go:123] Gathering logs for container status ...
I1210 00:38:21.931919 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1210 00:38:22.019712 1317926 logs.go:123] Gathering logs for kube-apiserver [d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d] ...
I1210 00:38:22.019742 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d"
I1210 00:38:22.106326 1317926 logs.go:123] Gathering logs for coredns [1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154] ...
I1210 00:38:22.106366 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154"
I1210 00:38:22.151824 1317926 logs.go:123] Gathering logs for dmesg ...
I1210 00:38:22.151856 1317926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1210 00:38:22.173805 1317926 out.go:358] Setting ErrFile to fd 2...
I1210 00:38:22.173830 1317926 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1210 00:38:22.173879 1317926 out.go:270] X Problems detected in kubelet:
W1210 00:38:22.173894 1317926 out.go:270] Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.723929 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:22.173903 1317926 out.go:270] Dec 10 00:38:05 old-k8s-version-452467 kubelet[660]: E1210 00:38:05.722301 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:22.173916 1317926 out.go:270] Dec 10 00:38:06 old-k8s-version-452467 kubelet[660]: E1210 00:38:06.723507 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1210 00:38:22.173922 1317926 out.go:270] Dec 10 00:38:16 old-k8s-version-452467 kubelet[660]: E1210 00:38:16.722163 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
W1210 00:38:22.173929 1317926 out.go:270] Dec 10 00:38:18 old-k8s-version-452467 kubelet[660]: E1210 00:38:18.722562 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1210 00:38:22.173939 1317926 out.go:358] Setting ErrFile to fd 2...
I1210 00:38:22.173945 1317926 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1210 00:38:26.283770 1327579 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I1210 00:38:27.087582 1327579 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I1210 00:38:27.211815 1327579 kubeadm.go:310] [certs] Generating "sa" key and public key
I1210 00:38:27.212042 1327579 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1210 00:38:28.069282 1327579 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I1210 00:38:28.506808 1327579 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I1210 00:38:28.894027 1327579 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1210 00:38:29.464530 1327579 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1210 00:38:30.275577 1327579 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1210 00:38:30.276292 1327579 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1210 00:38:30.279354 1327579 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1210 00:38:30.282752 1327579 out.go:235] - Booting up control plane ...
I1210 00:38:30.282861 1327579 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I1210 00:38:30.282942 1327579 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1210 00:38:30.283012 1327579 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I1210 00:38:30.307987 1327579 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1210 00:38:30.315135 1327579 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1210 00:38:30.317527 1327579 kubeadm.go:310] [kubelet-start] Starting the kubelet
I1210 00:38:30.416709 1327579 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I1210 00:38:30.416836 1327579 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
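The endpoint named in this kubelet-check is the kubelet's own healthz server on localhost; while waiting, it can be polled by hand from inside the node:

  curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy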
I1210 00:38:32.175795 1317926 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1210 00:38:32.189016 1317926 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
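The same probe works outside the harness; /healthz is typically readable without credentials (the system:public-info-viewer ClusterRole grants it to unauthenticated users), so a manual spot check from the host is:

  curl -k https://192.168.85.2:8443/healthz
  # prints: ok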
I1210 00:38:32.192274 1317926 out.go:201]
W1210 00:38:32.194627 1317926 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1210 00:38:32.194671 1317926 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1210 00:38:32.194691 1317926 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1210 00:38:32.194703 1317926 out.go:270] *
W1210 00:38:32.195702 1317926 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1210 00:38:32.197399 1317926 out.go:201]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
720d8098e8a17 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 0a33ed0127884 dashboard-metrics-scraper-8d5bb5db8-kvpqt
c7fccd82a21f5 ba04bb24b9575 5 minutes ago Running storage-provisioner 2 a1a845ba720a7 storage-provisioner
07bae172f72fe 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 79d1b69954d7b kubernetes-dashboard-cd95d586-7j947
eb6cb704aa86f 2be0bcf609c65 5 minutes ago Running kindnet-cni 1 cedfdc80db485 kindnet-lp585
bf3538284e45b 1611cd07b61d5 5 minutes ago Running busybox 1 3f69c3c1ec426 busybox
8f7d3ec91670d db91994f4ee8f 5 minutes ago Running coredns 1 e72d49792580f coredns-74ff55c5b-zv627
a07efca07bb0c 25a5233254979 5 minutes ago Running kube-proxy 1 eb19442f83677 kube-proxy-brbcq
7348adcdd286c ba04bb24b9575 5 minutes ago Exited storage-provisioner 1 a1a845ba720a7 storage-provisioner
4af50a0427155 05b738aa1bc63 6 minutes ago Running etcd 1 cc5e3542f7fbe etcd-old-k8s-version-452467
d10aa6cf8bb95 2c08bbbc02d3a 6 minutes ago Running kube-apiserver 1 01770844b62a9 kube-apiserver-old-k8s-version-452467
dba229dd52eb5 1df8a2b116bd1 6 minutes ago Running kube-controller-manager 1 074cc0e64d43f kube-controller-manager-old-k8s-version-452467
c845e26bdd983 e7605f88f17d6 6 minutes ago Running kube-scheduler 1 eb76a512edf3b kube-scheduler-old-k8s-version-452467
b3303fc80af67 1611cd07b61d5 6 minutes ago Exited busybox 0 2aa56511c007f busybox
1c44a7c764baf db91994f4ee8f 8 minutes ago Exited coredns 0 6c45221af9745 coredns-74ff55c5b-zv627
bb45327cb18ab 2be0bcf609c65 8 minutes ago Exited kindnet-cni 0 0bdd191fc70d5 kindnet-lp585
d6c87e8111e13 25a5233254979 8 minutes ago Exited kube-proxy 0 427f101219662 kube-proxy-brbcq
47c9e05bcabfd e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 6d8874faa8e6b kube-scheduler-old-k8s-version-452467
f32ed98a9f4fb 05b738aa1bc63 8 minutes ago Exited etcd 0 d2b99c571aac4 etcd-old-k8s-version-452467
47554c6ffc0ca 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 d04c0038b579c kube-apiserver-old-k8s-version-452467
6c840ad0200fb 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 9301ce24b8c24 kube-controller-manager-old-k8s-version-452467
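When chasing a single workload, the same table can be narrowed; crictl accepts name and state filters, e.g.:

  sudo crictl ps -a --name dashboard-metrics-scraper
  sudo crictl ps -a --state exited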
==> containerd <==
Dec 10 00:34:51 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:34:51.752587656Z" level=info msg="StartContainer for \"edd265f2eebfd07c17ead26dd34c12097aa2bb1eb70f9e5698734773d03a701a\""
Dec 10 00:34:51 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:34:51.823564434Z" level=info msg="StartContainer for \"edd265f2eebfd07c17ead26dd34c12097aa2bb1eb70f9e5698734773d03a701a\" returns successfully"
Dec 10 00:34:51 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:34:51.851393708Z" level=info msg="shim disconnected" id=edd265f2eebfd07c17ead26dd34c12097aa2bb1eb70f9e5698734773d03a701a namespace=k8s.io
Dec 10 00:34:51 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:34:51.851453874Z" level=warning msg="cleaning up after shim disconnected" id=edd265f2eebfd07c17ead26dd34c12097aa2bb1eb70f9e5698734773d03a701a namespace=k8s.io
Dec 10 00:34:51 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:34:51.851465173Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 10 00:34:52 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:34:52.555930833Z" level=info msg="RemoveContainer for \"311512fc63a087e8a114e3f9ce789f716e580479a2514e1dd348e51f23637b6d\""
Dec 10 00:34:52 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:34:52.568303679Z" level=info msg="RemoveContainer for \"311512fc63a087e8a114e3f9ce789f716e580479a2514e1dd348e51f23637b6d\" returns successfully"
Dec 10 00:35:42 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:35:42.723090344Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 10 00:35:42 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:35:42.729609563Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
Dec 10 00:35:42 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:35:42.731595498Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Dec 10 00:35:42 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:35:42.731688550Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Dec 10 00:36:23 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:36:23.728713849Z" level=info msg="CreateContainer within sandbox \"0a33ed012788461840dd7ad3cd3d1c6b84f701c81e27680fdb1c6c139e501fc6\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Dec 10 00:36:23 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:36:23.747786299Z" level=info msg="CreateContainer within sandbox \"0a33ed012788461840dd7ad3cd3d1c6b84f701c81e27680fdb1c6c139e501fc6\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1\""
Dec 10 00:36:23 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:36:23.748638200Z" level=info msg="StartContainer for \"720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1\""
Dec 10 00:36:23 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:36:23.835085107Z" level=info msg="StartContainer for \"720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1\" returns successfully"
Dec 10 00:36:23 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:36:23.893645480Z" level=info msg="shim disconnected" id=720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1 namespace=k8s.io
Dec 10 00:36:23 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:36:23.893927598Z" level=warning msg="cleaning up after shim disconnected" id=720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1 namespace=k8s.io
Dec 10 00:36:23 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:36:23.894054159Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 10 00:36:23 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:36:23.914354672Z" level=warning msg="cleanup warnings time=\"2024-12-10T00:36:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 10 00:36:24 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:36:24.847804058Z" level=info msg="RemoveContainer for \"edd265f2eebfd07c17ead26dd34c12097aa2bb1eb70f9e5698734773d03a701a\""
Dec 10 00:36:24 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:36:24.854243697Z" level=info msg="RemoveContainer for \"edd265f2eebfd07c17ead26dd34c12097aa2bb1eb70f9e5698734773d03a701a\" returns successfully"
Dec 10 00:38:31 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:38:31.728559121Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 10 00:38:31 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:38:31.737649581Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
Dec 10 00:38:31 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:38:31.739786117Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Dec 10 00:38:31 old-k8s-version-452467 containerd[571]: time="2024-12-10T00:38:31.739938425Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
==> coredns [1c44a7c764baf7625d9e18e7cf6e17917073f9dbd723c0e5ebadc919126ca154] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:44764 - 50055 "HINFO IN 3664458908557211423.8772692467305178468. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022512109s
==> coredns [8f7d3ec91670d4f0c3292926c2ec0c5dc02c5e79083f4719970474d576f96051] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:41135 - 19855 "HINFO IN 5840683023185266056.8426415266453738879. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03038157s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I1210 00:33:13.713584 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-10 00:32:43.712892617 +0000 UTC m=+0.031102094) (total time: 30.000586228s):
Trace[2019727887]: [30.000586228s] [30.000586228s] END
E1210 00:33:13.713623 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I1210 00:33:13.713917 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-10 00:32:43.713547649 +0000 UTC m=+0.031757118) (total time: 30.000349901s):
Trace[939984059]: [30.000349901s] [30.000349901s] END
E1210 00:33:13.713934 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I1210 00:33:13.714298 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-10 00:32:43.713859805 +0000 UTC m=+0.032069282) (total time: 30.000423474s):
Trace[911902081]: [30.000423474s] [30.000423474s] END
E1210 00:33:13.714325 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
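The 10.96.0.1:443 target in these timeouts is the default clusterIP of the kubernetes Service, meaning CoreDNS briefly could not reach the API server through the service VIP while the control plane was restarting. The address is easy to confirm:

  kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'
  # typically prints 10.96.0.1 under minikube's default service CIDR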
==> describe nodes <==
Name: old-k8s-version-452467
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-452467
kubernetes.io/os=linux
minikube.k8s.io/commit=ef4b1d364e31f576638442321d9f6b3bc3aea9a9
minikube.k8s.io/name=old-k8s-version-452467
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_12_10T00_30_00_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 10 Dec 2024 00:29:56 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-452467
AcquireTime: <unset>
RenewTime: Tue, 10 Dec 2024 00:38:33 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 10 Dec 2024 00:33:40 +0000 Tue, 10 Dec 2024 00:29:50 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 10 Dec 2024 00:33:40 +0000 Tue, 10 Dec 2024 00:29:50 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 10 Dec 2024 00:33:40 +0000 Tue, 10 Dec 2024 00:29:50 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 10 Dec 2024 00:33:40 +0000 Tue, 10 Dec 2024 00:30:14 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.85.2
Hostname: old-k8s-version-452467
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
System Info:
Machine ID: ae6c6c826b5c407f99a65d7da8bab149
System UUID: 51b29e6b-c907-4241-9335-54e1ce25b75c
Boot ID: a0c9ee97-1499-4fd7-8795-1e2e1add1b79
Kernel Version: 5.15.0-1072-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.22
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
  Namespace             Name                                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------             ----                                            ------------  ----------  ---------------  -------------  ---
  default               busybox                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
  kube-system           coredns-74ff55c5b-zv627                         100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m20s
  kube-system           etcd-old-k8s-version-452467                     100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m26s
  kube-system           kindnet-lp585                                   100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m20s
  kube-system           kube-apiserver-old-k8s-version-452467           250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m26s
  kube-system           kube-controller-manager-old-k8s-version-452467  200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m26s
  kube-system           kube-proxy-brbcq                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
  kube-system           kube-scheduler-old-k8s-version-452467           100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
  kube-system           metrics-server-9975d5f86-kls2p                  100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m30s
  kube-system           storage-provisioner                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
  kubernetes-dashboard  dashboard-metrics-scraper-8d5bb5db8-kvpqt       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
  kubernetes-dashboard  kubernetes-dashboard-cd95d586-7j947             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (47%)  100m (5%)
  memory             420Mi (5%)  220Mi (2%)
  ephemeral-storage  100Mi (0%)  0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  NodeHasSufficientMemory  8m45s (x5 over 8m45s)  kubelet     Node old-k8s-version-452467 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m45s (x4 over 8m45s)  kubelet     Node old-k8s-version-452467 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m45s (x4 over 8m45s)  kubelet     Node old-k8s-version-452467 status is now: NodeHasSufficientPID
  Normal  Starting                 8m26s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  8m26s                  kubelet     Node old-k8s-version-452467 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m26s                  kubelet     Node old-k8s-version-452467 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m26s                  kubelet     Node old-k8s-version-452467 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m26s                  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                8m20s                  kubelet     Node old-k8s-version-452467 status is now: NodeReady
  Normal  Starting                 8m19s                  kube-proxy  Starting kube-proxy.
  Normal  Starting                 6m3s                   kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-452467 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m3s (x7 over 6m3s)    kubelet     Node old-k8s-version-452467 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-452467 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  6m3s                   kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 5m51s                  kube-proxy  Starting kube-proxy.
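One sanity check worth making on the node description: the Allocated resources percentages are just the summed requests over the node's allocatable capacity (2 CPUs, 8022304Ki of memory), so 950m/2000m rounds to 47% CPU and 420Mi against roughly 7.6Gi rounds to 5% memory, matching the table. A small worked sketch of that arithmetic, with the values copied from the tables above:

package main

import "fmt"

func main() {
	// Allocatable, from the node status above.
	allocCPUMilli := int64(2000) // 2 CPUs
	allocMemKi := int64(8022304) // memory: 8022304Ki

	// Summed requests from the pod table above.
	reqCPUMilli := int64(950)     // 950m
	reqMemKi := int64(420) * 1024 // 420Mi expressed in Ki

	fmt.Printf("cpu:    %dm (%d%%)\n", reqCPUMilli, reqCPUMilli*100/allocCPUMilli) // 950m (47%)
	fmt.Printf("memory: %dMi (%d%%)\n", int64(420), reqMemKi*100/allocMemKi)       // 420Mi (5%)
}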
==> dmesg <==
[Dec 9 22:54] hrtimer: interrupt took 43995595 ns
[Dec 9 22:55] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
==> etcd [4af50a042715586c1520c769596150802ec24fea9dededba64bfee32b57db6b4] <==
2024-12-10 00:34:27.504879 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:34:37.499545 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:34:47.498998 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:34:57.499265 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:35:07.499243 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:35:17.498876 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:35:27.499538 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:35:37.500022 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:35:47.499968 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:35:57.499298 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:36:07.499338 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:36:17.500948 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:36:27.499269 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:36:37.498991 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:36:47.499301 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:36:57.499245 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:37:07.499403 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:37:17.504621 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:37:27.499691 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:37:37.499248 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:37:47.498831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:37:57.502594 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:38:07.498837 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:38:17.498870 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:38:27.499244 I | etcdserver/api/etcdhttp: /health OK (status code 200)
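This etcd instance answers its /health endpoint every ten seconds for the whole window, so storage is not implicated in the test failure. The probe is an ordinary GET against the client URL; a hedged sketch of an equivalent client-side check, assuming a plaintext listener for simplicity (a kubeadm-style etcd serves TLS on 2379 and would additionally need client certificates):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// etcd's etcdhttp handler answers GET /health on the client URL
	// (https://192.168.85.2:2379 in the logs above).
	resp, err := client.Get("http://127.0.0.1:2379/health")
	if err != nil {
		fmt.Println("health check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 {"health":"true"}
}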
==> etcd [f32ed98a9f4fb88900b393b8cbc9aafe5ca9657ad611152028d4d3c87423ff76] <==
raft2024/12/10 00:29:50 INFO: 9f0758e1c58a86ed became candidate at term 2
raft2024/12/10 00:29:50 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
raft2024/12/10 00:29:50 INFO: 9f0758e1c58a86ed became leader at term 2
raft2024/12/10 00:29:50 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
2024-12-10 00:29:50.993963 I | etcdserver: setting up the initial cluster version to 3.4
2024-12-10 00:29:50.994196 I | etcdserver: published {Name:old-k8s-version-452467 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
2024-12-10 00:29:50.994464 I | embed: ready to serve client requests
2024-12-10 00:29:50.994580 I | embed: ready to serve client requests
2024-12-10 00:29:50.996262 I | embed: serving client requests on 127.0.0.1:2379
2024-12-10 00:29:51.001549 N | etcdserver/membership: set the initial cluster version to 3.4
2024-12-10 00:29:51.001773 I | etcdserver/api: enabled capabilities for version 3.4
2024-12-10 00:29:51.010216 I | embed: serving client requests on 192.168.85.2:2379
2024-12-10 00:30:11.004108 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:30:11.100466 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:30:21.100920 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:30:31.100781 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:30:41.100730 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:30:51.100831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:31:01.100894 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:31:11.101015 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:31:21.100652 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:31:31.100794 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:31:41.100928 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:31:51.100741 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-10 00:32:01.101392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
00:38:35 up 8:20, 0 users, load average: 1.60, 2.00, 2.52
Linux old-k8s-version-452467 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [bb45327cb18abb34ceec92d61ee6a7fd8e6e8d62482359d549de6a4ccee9d15e] <==
I1210 00:30:18.528218 1 controller.go:365] Waiting for informer caches to sync
I1210 00:30:18.528232 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I1210 00:30:18.729177 1 shared_informer.go:320] Caches are synced for kube-network-policies
I1210 00:30:18.729206 1 metrics.go:61] Registering metrics
I1210 00:30:18.729411 1 controller.go:401] Syncing nftables rules
I1210 00:30:28.526432 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:30:28.526491 1 main.go:301] handling current node
I1210 00:30:38.523151 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:30:38.523189 1 main.go:301] handling current node
I1210 00:30:48.530908 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:30:48.530944 1 main.go:301] handling current node
I1210 00:30:58.531163 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:30:58.531199 1 main.go:301] handling current node
I1210 00:31:08.523003 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:31:08.523065 1 main.go:301] handling current node
I1210 00:31:18.524568 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:31:18.524598 1 main.go:301] handling current node
I1210 00:31:28.530349 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:31:28.530390 1 main.go:301] handling current node
I1210 00:31:38.526829 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:31:38.526936 1 main.go:301] handling current node
I1210 00:31:48.530296 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:31:48.530331 1 main.go:301] handling current node
I1210 00:31:58.523798 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:31:58.524001 1 main.go:301] handling current node
==> kindnet [eb6cb704aa86f1b2682524c7942049c844d7571d3c1b8d00ef0e4a7b9fe13469] <==
I1210 00:36:34.924271 1 main.go:301] handling current node
I1210 00:36:44.922268 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:36:44.922303 1 main.go:301] handling current node
I1210 00:36:54.929363 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:36:54.929402 1 main.go:301] handling current node
I1210 00:37:04.930662 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:37:04.930698 1 main.go:301] handling current node
I1210 00:37:14.923930 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:37:14.923964 1 main.go:301] handling current node
I1210 00:37:24.929473 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:37:24.929585 1 main.go:301] handling current node
I1210 00:37:34.930332 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:37:34.930367 1 main.go:301] handling current node
I1210 00:37:44.922499 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:37:44.922534 1 main.go:301] handling current node
I1210 00:37:54.929417 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:37:54.929456 1 main.go:301] handling current node
I1210 00:38:04.929452 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:38:04.929552 1 main.go:301] handling current node
I1210 00:38:14.926248 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:38:14.926285 1 main.go:301] handling current node
I1210 00:38:24.929879 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:38:24.930130 1 main.go:301] handling current node
I1210 00:38:34.931809 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1210 00:38:34.931844 1 main.go:301] handling current node
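Both kindnet logs show the same steady state: a reconcile pass roughly every ten seconds that enumerates nodes and, on this single-node cluster, only ever handles the current node. Roughly the shape of that loop, sketched from the log output rather than from kindnet's source:

package main

import (
	"fmt"
	"time"
)

type node struct {
	name string
	ips  map[string]struct{}
}

func main() {
	self := "old-k8s-version-452467"
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()

	for range ticker.C {
		// In the real daemon this is a List against the node informer cache.
		nodes := []node{{name: self, ips: map[string]struct{}{"192.168.85.2": {}}}}
		for _, n := range nodes {
			fmt.Printf("Handling node with IPs: %v\n", n.ips)
			if n.name == self {
				fmt.Println("handling current node") // nothing to route for ourselves
				continue
			}
			// For remote nodes, routes to their pod CIDRs would be installed here.
		}
	}
}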
==> kube-apiserver [47554c6ffc0cae06e0029ee91177f297527aafae5d849b0860de4206a766cf85] <==
I1210 00:29:57.233314 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1210 00:29:57.233373 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1210 00:29:57.246502 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1210 00:29:57.250383 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1210 00:29:57.250409 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1210 00:29:57.779691 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1210 00:29:57.835369 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1210 00:29:57.952135 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
I1210 00:29:57.953325 1 controller.go:606] quota admission added evaluator for: endpoints
I1210 00:29:57.957522 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1210 00:29:58.858682 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1210 00:29:59.734644 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1210 00:29:59.811390 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1210 00:30:08.237511 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1210 00:30:14.847435 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1210 00:30:14.871414 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1210 00:30:35.704893 1 client.go:360] parsed scheme: "passthrough"
I1210 00:30:35.704948 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1210 00:30:35.704956 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1210 00:31:11.853021 1 client.go:360] parsed scheme: "passthrough"
I1210 00:31:11.853101 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1210 00:31:11.853404 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1210 00:31:47.761540 1 client.go:360] parsed scheme: "passthrough"
I1210 00:31:47.761587 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1210 00:31:47.761605 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [d10aa6cf8bb958dbff6f52815b3163a3e2b74cb128fee3b5f522f9c38e44161d] <==
I1210 00:34:36.043486 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1210 00:34:36.043500 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1210 00:35:19.376331 1 client.go:360] parsed scheme: "passthrough"
I1210 00:35:19.376378 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1210 00:35:19.376388 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1210 00:35:44.241585 1 handler_proxy.go:102] no RequestInfo found in the context
E1210 00:35:44.241751 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1210 00:35:44.241786 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1210 00:35:57.671740 1 client.go:360] parsed scheme: "passthrough"
I1210 00:35:57.671783 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1210 00:35:57.671816 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1210 00:36:40.317798 1 client.go:360] parsed scheme: "passthrough"
I1210 00:36:40.317841 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1210 00:36:40.317850 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1210 00:37:16.404615 1 client.go:360] parsed scheme: "passthrough"
I1210 00:37:16.404751 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1210 00:37:16.404792 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1210 00:37:41.154947 1 handler_proxy.go:102] no RequestInfo found in the context
E1210 00:37:41.155020 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1210 00:37:41.155036 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1210 00:37:52.312502 1 client.go:360] parsed scheme: "passthrough"
I1210 00:37:52.312548 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1210 00:37:52.312563 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
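The recurring OpenAPI 503s in this block are the aggregation layer failing to proxy to the v1beta1.metrics.k8s.io APIService; its backend is the metrics-server pod that the kubelet section below shows stuck in ImagePullBackOff on the deliberately unreachable fake.domain image, so the 503s persist for the entire run. The same failure can be surfaced from a client via discovery; a sketch, assuming a kubeconfig at the default path:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Discovery hits /apis/metrics.k8s.io/v1beta1, which the aggregator
	// proxies to metrics-server; with no ready backend it returns 503.
	rl, err := cs.Discovery().ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
	if err != nil {
		fmt.Println("metrics API unavailable:", err) // "the server is currently unable to handle the request"
		return
	}
	for _, r := range rl.APIResources {
		fmt.Println(r.Name)
	}
}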
==> kube-controller-manager [6c840ad0200fba37339ce716ebb6cdf3cd64001c7a70a1d591157612d9b79f21] <==
I1210 00:30:14.880776 1 range_allocator.go:373] Set node old-k8s-version-452467 PodCIDR to [10.244.0.0/24]
I1210 00:30:14.890912 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I1210 00:30:14.945776 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-zv627"
I1210 00:30:14.966307 1 shared_informer.go:247] Caches are synced for expand
I1210 00:30:14.966520 1 shared_informer.go:247] Caches are synced for persistent volume
E1210 00:30:14.967600 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I1210 00:30:14.982625 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-brbcq"
I1210 00:30:14.995663 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lp585"
I1210 00:30:14.993680 1 shared_informer.go:247] Caches are synced for attach detach
I1210 00:30:14.993695 1 shared_informer.go:247] Caches are synced for PVC protection
I1210 00:30:15.024596 1 shared_informer.go:247] Caches are synced for resource quota
I1210 00:30:15.025389 1 shared_informer.go:247] Caches are synced for resource quota
E1210 00:30:15.040994 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I1210 00:30:15.048933 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-x8mtp"
I1210 00:30:15.055121 1 shared_informer.go:247] Caches are synced for stateful set
E1210 00:30:15.159830 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"2ecc94b4-08ff-4870-99dc-908c03da329f", ResourceVersion:"268", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63869387399, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d736c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d736e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001d73700), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001e28000), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d73720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d73740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001d73780)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001d34f60), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001e1e398), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a3dea0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400011af30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001e1e3e8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
E1210 00:30:15.163162 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"44d15bfa-b3fc-401c-8e80-c357bcb4d4cf", ResourceVersion:"288", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63869387400, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d737e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d73800)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001d73820), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d73840), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d73860), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d73880), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241108-5c6d2daf", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001d738a0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001d738e0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001d34fc0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001e1e5e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a3df10), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400011af50)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001e1e630)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I1210 00:30:15.182427 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1210 00:30:15.483718 1 shared_informer.go:247] Caches are synced for garbage collector
I1210 00:30:15.492182 1 shared_informer.go:247] Caches are synced for garbage collector
I1210 00:30:15.492221 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1210 00:30:16.605679 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I1210 00:30:16.650253 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-x8mtp"
I1210 00:30:19.769661 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I1210 00:32:03.654717 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
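The two long daemon_controller errors above ("the object has been modified") are ordinary optimistic-concurrency conflicts during startup: two writers raced on a DaemonSet, the write carrying the stale resourceVersion was rejected, and the controller simply re-queues and retries. Client code handles the same conflict with a read-modify-write retry loop; a sketch using client-go's retry helper, where the label tweak is a hypothetical mutation for illustration:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/retry"
)

func updateDaemonSet(cs *kubernetes.Clientset) error {
	// On a conflict, re-read the object and re-apply the change: the same
	// dance the daemon controller performs after the errors above.
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Labels == nil {
			ds.Labels = map[string]string{}
		}
		ds.Labels["example/touched"] = "true" // hypothetical mutation
		_, err = cs.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Println(err)
		return
	}
	if err := updateDaemonSet(kubernetes.NewForConfigOrDie(cfg)); err != nil {
		fmt.Println("update failed:", err)
	}
}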
==> kube-controller-manager [dba229dd52eb5898c5632240a141417354adcb8b40031a7991c546d3d524b2e0] <==
E1210 00:34:30.787524 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1210 00:34:37.884549 1 request.go:655] Throttling request took 1.048471907s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W1210 00:34:38.736043 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1210 00:35:01.289785 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1210 00:35:10.386650 1 request.go:655] Throttling request took 1.048226997s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W1210 00:35:11.238211 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1210 00:35:31.791559 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1210 00:35:42.888884 1 request.go:655] Throttling request took 1.048431954s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v2beta2?timeout=32s
W1210 00:35:43.740313 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1210 00:36:02.293475 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1210 00:36:15.390774 1 request.go:655] Throttling request took 1.04825961s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W1210 00:36:16.242335 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1210 00:36:32.795390 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1210 00:36:47.892882 1 request.go:655] Throttling request took 1.048117483s, request: GET:https://192.168.85.2:8443/apis/autoscaling/v1?timeout=32s
W1210 00:36:48.745129 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1210 00:37:03.297245 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1210 00:37:20.395581 1 request.go:655] Throttling request took 1.048545615s, request: GET:https://192.168.85.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
W1210 00:37:21.246997 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1210 00:37:33.801077 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1210 00:37:52.897549 1 request.go:655] Throttling request took 1.048389253s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W1210 00:37:53.749133 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1210 00:38:04.304969 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1210 00:38:25.399706 1 request.go:655] Throttling request took 1.048007173s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W1210 00:38:26.251563 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1210 00:38:34.811014 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
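The "Throttling request took ~1.05s" lines are client-side, not the API server pushing back: client-go's default rate limiter (QPS 5, burst 10) queues the discovery fan-out that the resource-quota and garbage-collector resyncs trigger, and any request that waited past a threshold gets logged. The knobs live on rest.Config; a minimal sketch with hypothetical values:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{Host: "https://192.168.85.2:8443"}

	// client-go defaults are QPS=5, Burst=10; a discovery sweep across all
	// API groups easily exceeds that, producing the throttling lines above.
	cfg.QPS = 50
	cfg.Burst = 100

	_ = kubernetes.NewForConfigOrDie(cfg)
}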
==> kube-proxy [a07efca07bb0cf28e7064fbef0acf25270e471895a6ce822908b95fbb2c3088e] <==
I1210 00:32:43.787085 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I1210 00:32:43.787424 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W1210 00:32:43.808646 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1210 00:32:43.808747 1 server_others.go:185] Using iptables Proxier.
I1210 00:32:43.808986 1 server.go:650] Version: v1.20.0
I1210 00:32:43.809857 1 config.go:315] Starting service config controller
I1210 00:32:43.809877 1 shared_informer.go:240] Waiting for caches to sync for service config
I1210 00:32:43.809895 1 config.go:224] Starting endpoint slice config controller
I1210 00:32:43.809901 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1210 00:32:43.910066 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1210 00:32:43.910316 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [d6c87e8111e1392f5af7f9b3d57f23d8ec4f3831f51f1190a558c98b45ccc717] <==
I1210 00:30:15.940863 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I1210 00:30:15.941010 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W1210 00:30:15.989331 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1210 00:30:15.989443 1 server_others.go:185] Using iptables Proxier.
I1210 00:30:15.989759 1 server.go:650] Version: v1.20.0
I1210 00:30:15.990373 1 config.go:315] Starting service config controller
I1210 00:30:15.990385 1 shared_informer.go:240] Waiting for caches to sync for service config
I1210 00:30:15.990407 1 config.go:224] Starting endpoint slice config controller
I1210 00:30:15.990411 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1210 00:30:16.091245 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1210 00:30:16.091308 1 shared_informer.go:247] Caches are synced for service config
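Both kube-proxy instances start with an empty proxy-mode and fall back to iptables, which is the expected default when the mode is unset in the config. The decision reduces to a small fallback chain, sketched here rather than quoted from the kube-proxy source:

package main

import "fmt"

// chooseProxyMode mirrors the fallback in the logs: anything other than a
// recognized mode, including the empty string, degrades to iptables.
func chooseProxyMode(configured string) string {
	switch configured {
	case "iptables", "ipvs", "userspace":
		return configured
	default:
		fmt.Printf("Unknown proxy mode %q, assuming iptables proxy\n", configured)
		return "iptables"
	}
}

func main() {
	fmt.Println(chooseProxyMode("")) // iptables
}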
==> kube-scheduler [47c9e05bcabfd135c8f0ae81ff8ab5e50559a9695421b96f927ecf058b468869] <==
W1210 00:29:56.390426 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1210 00:29:56.390540 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1210 00:29:56.390571 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1210 00:29:56.390612 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1210 00:29:56.453965 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1210 00:29:56.460531 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1210 00:29:56.467138 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1210 00:29:56.473457 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E1210 00:29:56.494483 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1210 00:29:56.502764 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1210 00:29:56.503064 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1210 00:29:56.503292 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1210 00:29:56.503477 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1210 00:29:56.503654 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1210 00:29:56.503822 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1210 00:29:56.504064 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1210 00:29:56.504243 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1210 00:29:56.504428 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1210 00:29:56.504596 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1210 00:29:56.504729 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1210 00:29:57.367337 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1210 00:29:57.450852 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1210 00:29:57.457162 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1210 00:29:57.563764 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I1210 00:29:57.973819 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
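All of the "forbidden" reflector errors in this block fall within the first two seconds after the scheduler came up, before kubeadm's RBAC bindings had propagated; the final "Caches are synced" line shows they resolved on retry. Reflectors retry internally, but the same wait-until-allowed pattern can be written with the apimachinery backoff helper; an illustrative sketch, not the scheduler's code:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	// Startup behaves like this: the LIST comes back forbidden until the
	// RBAC bindings propagate, then succeeds and caches sync.
	err := wait.ExponentialBackoff(wait.Backoff{Duration: time.Second, Factor: 2, Steps: 5}, func() (bool, error) {
		attempts++
		if attempts < 3 {
			fmt.Println("list forbidden, retrying") // matches the errors above
			return false, nil                       // not done, not fatal
		}
		return true, nil // RBAC propagated, list succeeded
	})
	fmt.Println("attempts:", attempts, "err:", err)
}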
==> kube-scheduler [c845e26bdd983839bbea7c133b477af031e928fe8e8d473daa9a0f27b090f077] <==
I1210 00:32:36.451980 1 serving.go:331] Generated self-signed cert in-memory
I1210 00:32:40.706739 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1210 00:32:40.707327 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I1210 00:32:40.707337 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I1210 00:32:40.707355 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1210 00:32:40.707452 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1210 00:32:40.707457 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1210 00:32:40.707468 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1210 00:32:40.707472 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1210 00:32:40.809845 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I1210 00:32:40.809882 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
I1210 00:32:40.810002 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Dec 10 00:37:00 old-k8s-version-452467 kubelet[660]: E1210 00:37:00.722789 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
Dec 10 00:37:11 old-k8s-version-452467 kubelet[660]: E1210 00:37:11.722844 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 10 00:37:15 old-k8s-version-452467 kubelet[660]: I1210 00:37:15.721990 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1
Dec 10 00:37:15 old-k8s-version-452467 kubelet[660]: E1210 00:37:15.725496 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
Dec 10 00:37:25 old-k8s-version-452467 kubelet[660]: E1210 00:37:25.722996 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 10 00:37:29 old-k8s-version-452467 kubelet[660]: I1210 00:37:29.721822 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1
Dec 10 00:37:29 old-k8s-version-452467 kubelet[660]: E1210 00:37:29.722673 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: I1210 00:37:40.722136 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1
Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722443 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
Dec 10 00:37:40 old-k8s-version-452467 kubelet[660]: E1210 00:37:40.722952 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.722722 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: I1210 00:37:53.723629 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1
Dec 10 00:37:53 old-k8s-version-452467 kubelet[660]: E1210 00:37:53.723929 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
Dec 10 00:38:05 old-k8s-version-452467 kubelet[660]: I1210 00:38:05.721902 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1
Dec 10 00:38:05 old-k8s-version-452467 kubelet[660]: E1210 00:38:05.722301 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
Dec 10 00:38:06 old-k8s-version-452467 kubelet[660]: E1210 00:38:06.723507 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 10 00:38:16 old-k8s-version-452467 kubelet[660]: I1210 00:38:16.721778 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1
Dec 10 00:38:16 old-k8s-version-452467 kubelet[660]: E1210 00:38:16.722163 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
Dec 10 00:38:18 old-k8s-version-452467 kubelet[660]: E1210 00:38:18.722562 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 10 00:38:30 old-k8s-version-452467 kubelet[660]: I1210 00:38:30.722505 660 scope.go:95] [topologymanager] RemoveContainer - Container ID: 720d8098e8a170fe438c432b737544316b8ca98107d223aab9e4fa0a2be698d1
Dec 10 00:38:30 old-k8s-version-452467 kubelet[660]: E1210 00:38:30.723486 660 pod_workers.go:191] Error syncing pod 235fa363-058f-4e79-b793-e0f0aa028529 ("dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-kvpqt_kubernetes-dashboard(235fa363-058f-4e79-b793-e0f0aa028529)"
Dec 10 00:38:31 old-k8s-version-452467 kubelet[660]: E1210 00:38:31.740298 660 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Dec 10 00:38:31 old-k8s-version-452467 kubelet[660]: E1210 00:38:31.740777 660 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Dec 10 00:38:31 old-k8s-version-452467 kubelet[660]: E1210 00:38:31.741386 660 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-6t6sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Dec 10 00:38:31 old-k8s-version-452467 kubelet[660]: E1210 00:38:31.741589 660 pod_workers.go:191] Error syncing pod d6789e98-ba5e-4b73-a3b9-83f47c96ef54 ("metrics-server-9975d5f86-kls2p_kube-system(d6789e98-ba5e-4b73-a3b9-83f47c96ef54)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
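Both failure loops in the kubelet log are stable states rather than transient errors: dashboard-metrics-scraper is in CrashLoopBackOff, and metrics-server cannot pull its image because the registry host never resolves (fake.domain appears to be deliberately unresolvable in this test, which keeps metrics-server non-running). The resolver failure in the pull error can be reproduced directly; a minimal sketch:

package main

import (
	"fmt"
	"net"
)

func main() {
	// fake.domain is the registry host from the ErrImagePull above; it has
	// no DNS record, so every pull attempt fails the same way.
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		fmt.Println("lookup failed as expected:", err)
		return
	}
	fmt.Println("unexpected resolution:", addrs)
}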
==> kubernetes-dashboard [07bae172f72fe4434e1301946a5ddcdbb80ec1ab3271d5a56fdb88b63b94dab6] <==
2024/12/10 00:33:05 Using namespace: kubernetes-dashboard
2024/12/10 00:33:05 Using in-cluster config to connect to apiserver
2024/12/10 00:33:05 Using secret token for csrf signing
2024/12/10 00:33:05 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/12/10 00:33:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/12/10 00:33:06 Successful initial request to the apiserver, version: v1.20.0
2024/12/10 00:33:06 Generating JWE encryption key
2024/12/10 00:33:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/12/10 00:33:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/12/10 00:33:07 Initializing JWE encryption key from synchronized object
2024/12/10 00:33:07 Creating in-cluster Sidecar client
2024/12/10 00:33:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/10 00:33:07 Serving insecurely on HTTP port: 9090
2024/12/10 00:33:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/10 00:34:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/10 00:34:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/10 00:35:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/10 00:35:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/10 00:36:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/10 00:36:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/10 00:37:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/10 00:37:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/10 00:38:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/10 00:33:05 Starting overwatch
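The repeating health-check failure is consistent with the scraper pod crash-looping in the kubelet log: the dashboard-metrics-scraper Service exists, but it has no ready endpoints, so the dashboard's metric client keeps retrying on its fixed 30-second interval. A hedged client-go check for ready endpoints behind that Service (assumes a reachable kubeconfig at the default location):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Endpoints object shares the Service's name and namespace.
	ep, err := client.CoreV1().Endpoints("kubernetes-dashboard").
		Get(context.TODO(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := 0
	for _, s := range ep.Subsets {
		ready += len(s.Addresses)
	}
	// Zero ready addresses matches the "unable to handle the request" loop.
	fmt.Printf("ready scraper endpoints: %d\n", ready)
}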
==> storage-provisioner [7348adcdd286c61f4ccf093e92ad25ca6486e59b75c543d43162018687d55d41] <==
I1210 00:32:43.537736 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1210 00:33:13.539197 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
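The fatal i/o timeout means this provisioner instance could not reach the in-cluster apiserver VIP (10.96.0.1:443) within the request's 32s timeout, which is plausible while kube-proxy and the apiserver are still settling after the container restart; the replacement instance below succeeds. A minimal reachability probe for that VIP, only meaningful when run inside the cluster's network:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the default kubernetes Service VIP the failing
	// provisioner dialed; a plain TCP dial is enough to test reachability.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable")
}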
==> storage-provisioner [c7fccd82a21f5de7610e961cd5f8fd57f24b93834644b879f4b9f9808b32d488] <==
I1210 00:33:28.843484 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1210 00:33:28.862269 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1210 00:33:28.862339 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1210 00:33:46.349291 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1210 00:33:46.349571 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-452467_208b6159-2084-4217-8cef-455cbbccdc51!
I1210 00:33:46.349724 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cfccc82a-5197-4de2-a773-7997fc6e1c33", APIVersion:"v1", ResourceVersion:"856", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-452467_208b6159-2084-4217-8cef-455cbbccdc51 became leader
I1210 00:33:46.450776 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-452467_208b6159-2084-4217-8cef-455cbbccdc51!
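The leaderelection.go lines show the provisioner acquiring the kube-system/k8s.io-minikube-hostpath lock via client-go leader election before starting its controller; the Endpoints event above suggests it uses an Endpoints-based lock, whereas current client-go prefers Lease locks. A minimal sketch of the same pattern using a Lease lock (lock name and namespace taken from the log; the identity string and timings are hypothetical):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client: client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: "example-identity", // hypothetical
		},
	}

	// Blocks: the callback fires once the lease is acquired, mirroring the
	// "successfully acquired lease" line above.
	leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				fmt.Println("acquired lease; controller work would start here")
			},
			OnStoppedLeading: func() {
				fmt.Println("lost lease")
			},
		},
	})
}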
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-452467 -n old-k8s-version-452467
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-452467 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-kls2p
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-452467 describe pod metrics-server-9975d5f86-kls2p
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-452467 describe pod metrics-server-9975d5f86-kls2p: exit status 1 (137.104515ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-kls2p" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-452467 describe pod metrics-server-9975d5f86-kls2p: exit status 1
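The NotFound here is a list-then-describe race: the metrics-server pod reported as non-running at helpers_test.go:272 no longer existed by the time the describe ran, so the post-mortem step fails with exit status 1. The same non-running query via client-go, for reference (a sketch; any pod it returns can vanish before a follow-up Get, exactly as happened above):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Same field selector the test helper passes to kubectl.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace + "/" + p.Name)
	}
}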
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.95s)