=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-098617 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
E1209 23:14:23.257917 7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/functional-463603/client.crt: no such file or directory" logger="UnhandledError"
E1209 23:14:49.556284 7684 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/addons-013873/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-098617 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m18.207023195s)
-- stdout --
* [old-k8s-version-098617] minikube v1.34.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=19888
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-098617" primary control-plane node in "old-k8s-version-098617" cluster
* Pulling base image v0.0.45-1730888964-19917 ...
* Restarting existing docker container for "old-k8s-version-098617" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
* Verifying Kubernetes components...
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-098617 addons enable metrics-server
* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
-- /stdout --
** stderr **
I1209 23:14:11.391285 214436 out.go:345] Setting OutFile to fd 1 ...
I1209 23:14:11.391523 214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:14:11.391549 214436 out.go:358] Setting ErrFile to fd 2...
I1209 23:14:11.391568 214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:14:11.391860 214436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
I1209 23:14:11.392303 214436 out.go:352] Setting JSON to false
I1209 23:14:11.393201 214436 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3398,"bootTime":1733782653,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1209 23:14:11.393301 214436 start.go:139] virtualization:
I1209 23:14:11.395535 214436 out.go:177] * [old-k8s-version-098617] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1209 23:14:11.397129 214436 out.go:177] - MINIKUBE_LOCATION=19888
I1209 23:14:11.397204 214436 notify.go:220] Checking for updates...
I1209 23:14:11.399531 214436 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1209 23:14:11.402745 214436 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
I1209 23:14:11.405419 214436 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
I1209 23:14:11.408748 214436 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1209 23:14:11.411805 214436 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1209 23:14:11.414269 214436 config.go:182] Loaded profile config "old-k8s-version-098617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1209 23:14:11.417089 214436 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
I1209 23:14:11.419067 214436 driver.go:394] Setting default libvirt URI to qemu:///system
I1209 23:14:11.464567 214436 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
I1209 23:14:11.464682 214436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 23:14:11.556991 214436 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:59 SystemTime:2024-12-09 23:14:11.547833129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1209 23:14:11.557105 214436 docker.go:318] overlay module found
I1209 23:14:11.561167 214436 out.go:177] * Using the docker driver based on existing profile
I1209 23:14:11.562934 214436 start.go:297] selected driver: docker
I1209 23:14:11.562949 214436 start.go:901] validating driver "docker" against &{Name:old-k8s-version-098617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 23:14:11.563068 214436 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1209 23:14:11.563778 214436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 23:14:11.634037 214436 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:59 SystemTime:2024-12-09 23:14:11.624366058 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1209 23:14:11.634458 214436 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1209 23:14:11.634486 214436 cni.go:84] Creating CNI manager for ""
I1209 23:14:11.634531 214436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1209 23:14:11.634572 214436 start.go:340] cluster config:
{Name:old-k8s-version-098617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 23:14:11.637401 214436 out.go:177] * Starting "old-k8s-version-098617" primary control-plane node in "old-k8s-version-098617" cluster
I1209 23:14:11.638884 214436 cache.go:121] Beginning downloading kic base image for docker with containerd
I1209 23:14:11.640411 214436 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
I1209 23:14:11.641827 214436 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1209 23:14:11.641887 214436 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I1209 23:14:11.641896 214436 cache.go:56] Caching tarball of preloaded images
I1209 23:14:11.641982 214436 preload.go:172] Found /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1209 23:14:11.641993 214436 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I1209 23:14:11.642103 214436 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/config.json ...
I1209 23:14:11.642309 214436 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
I1209 23:14:11.670121 214436 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
I1209 23:14:11.670146 214436 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
I1209 23:14:11.670161 214436 cache.go:194] Successfully downloaded all kic artifacts
I1209 23:14:11.670184 214436 start.go:360] acquireMachinesLock for old-k8s-version-098617: {Name:mk653849e4ebf1e5c8bcd0acd3ea80cca1cdb2c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 23:14:11.670246 214436 start.go:364] duration metric: took 37.284µs to acquireMachinesLock for "old-k8s-version-098617"
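[editor's note] The acquireMachinesLock step above is a profile-scoped mutex guarding machine creation; the {Name:... Delay:500ms Timeout:10m0s} spec in the log describes its retry behavior. Below is a minimal, hypothetical Go sketch of that idea using an O_EXCL lock file with the same delay/timeout shape; it is an illustration, not minikube's actual locking code.

// lockfile.go: hypothetical sketch of an exclusive file lock with retry
// semantics mirroring the {Delay:500ms Timeout:10m0s} spec logged above.
package main

import (
	"fmt"
	"os"
	"time"
)

// acquire creates path with O_EXCL so only one process wins; it retries
// every delay until timeout expires.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to start the machine")
}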
I1209 23:14:11.670273 214436 start.go:96] Skipping create...Using existing machine configuration
I1209 23:14:11.670282 214436 fix.go:54] fixHost starting:
I1209 23:14:11.670528 214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
I1209 23:14:11.700397 214436 fix.go:112] recreateIfNeeded on old-k8s-version-098617: state=Stopped err=<nil>
W1209 23:14:11.700425 214436 fix.go:138] unexpected machine state, will restart: <nil>
I1209 23:14:11.702392 214436 out.go:177] * Restarting existing docker container for "old-k8s-version-098617" ...
I1209 23:14:11.703841 214436 cli_runner.go:164] Run: docker start old-k8s-version-098617
I1209 23:14:12.053321 214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
I1209 23:14:12.087469 214436 kic.go:430] container "old-k8s-version-098617" state is running.
I1209 23:14:12.088004 214436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098617
I1209 23:14:12.126927 214436 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/config.json ...
I1209 23:14:12.127237 214436 machine.go:93] provisionDockerMachine start ...
I1209 23:14:12.127320 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:12.162611 214436 main.go:141] libmachine: Using SSH client type: native
I1209 23:14:12.162965 214436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I1209 23:14:12.162983 214436 main.go:141] libmachine: About to run SSH command:
hostname
I1209 23:14:12.165237 214436 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1209 23:14:15.298259 214436 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098617
I1209 23:14:15.298287 214436 ubuntu.go:169] provisioning hostname "old-k8s-version-098617"
I1209 23:14:15.298366 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:15.325145 214436 main.go:141] libmachine: Using SSH client type: native
I1209 23:14:15.325420 214436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I1209 23:14:15.325438 214436 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-098617 && echo "old-k8s-version-098617" | sudo tee /etc/hostname
I1209 23:14:15.468496 214436 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098617
I1209 23:14:15.468668 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:15.497061 214436 main.go:141] libmachine: Using SSH client type: native
I1209 23:14:15.497313 214436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I1209 23:14:15.497339 214436 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-098617' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098617/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-098617' | sudo tee -a /etc/hosts;
fi
fi
I1209 23:14:15.622957 214436 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1209 23:14:15.622987 214436 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19888-2244/.minikube CaCertPath:/home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19888-2244/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19888-2244/.minikube}
I1209 23:14:15.623013 214436 ubuntu.go:177] setting up certificates
I1209 23:14:15.623024 214436 provision.go:84] configureAuth start
I1209 23:14:15.623086 214436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098617
I1209 23:14:15.642686 214436 provision.go:143] copyHostCerts
I1209 23:14:15.642797 214436 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-2244/.minikube/cert.pem, removing ...
I1209 23:14:15.642818 214436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-2244/.minikube/cert.pem
I1209 23:14:15.642893 214436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19888-2244/.minikube/cert.pem (1123 bytes)
I1209 23:14:15.643007 214436 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-2244/.minikube/key.pem, removing ...
I1209 23:14:15.643019 214436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-2244/.minikube/key.pem
I1209 23:14:15.643049 214436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19888-2244/.minikube/key.pem (1675 bytes)
I1209 23:14:15.643125 214436 exec_runner.go:144] found /home/jenkins/minikube-integration/19888-2244/.minikube/ca.pem, removing ...
I1209 23:14:15.643139 214436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19888-2244/.minikube/ca.pem
I1209 23:14:15.643166 214436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19888-2244/.minikube/ca.pem (1078 bytes)
I1209 23:14:15.643228 214436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19888-2244/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098617 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-098617]
I1209 23:14:16.075534 214436 provision.go:177] copyRemoteCerts
I1209 23:14:16.075664 214436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1209 23:14:16.075747 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:16.094331 214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
I1209 23:14:16.196765 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1209 23:14:16.221393 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1209 23:14:16.253256 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1209 23:14:16.311867 214436 provision.go:87] duration metric: took 688.824014ms to configureAuth
I1209 23:14:16.311894 214436 ubuntu.go:193] setting minikube options for container-runtime
I1209 23:14:16.312127 214436 config.go:182] Loaded profile config "old-k8s-version-098617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1209 23:14:16.312144 214436 machine.go:96] duration metric: took 4.184897091s to provisionDockerMachine
I1209 23:14:16.312154 214436 start.go:293] postStartSetup for "old-k8s-version-098617" (driver="docker")
I1209 23:14:16.312178 214436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1209 23:14:16.313227 214436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1209 23:14:16.313307 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:16.341954 214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
I1209 23:14:16.438869 214436 ssh_runner.go:195] Run: cat /etc/os-release
I1209 23:14:16.444177 214436 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1209 23:14:16.444208 214436 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1209 23:14:16.444219 214436 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1209 23:14:16.444226 214436 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1209 23:14:16.444237 214436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-2244/.minikube/addons for local assets ...
I1209 23:14:16.444291 214436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19888-2244/.minikube/files for local assets ...
I1209 23:14:16.444368 214436 filesync.go:149] local asset: /home/jenkins/minikube-integration/19888-2244/.minikube/files/etc/ssl/certs/76842.pem -> 76842.pem in /etc/ssl/certs
I1209 23:14:16.444471 214436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1209 23:14:16.455543 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/files/etc/ssl/certs/76842.pem --> /etc/ssl/certs/76842.pem (1708 bytes)
I1209 23:14:16.493529 214436 start.go:296] duration metric: took 181.346912ms for postStartSetup
I1209 23:14:16.493609 214436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1209 23:14:16.493670 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:16.513477 214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
I1209 23:14:16.611231 214436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1209 23:14:16.619129 214436 fix.go:56] duration metric: took 4.948839099s for fixHost
I1209 23:14:16.619156 214436 start.go:83] releasing machines lock for "old-k8s-version-098617", held for 4.948897133s
I1209 23:14:16.619226 214436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098617
I1209 23:14:16.741129 214436 ssh_runner.go:195] Run: cat /version.json
I1209 23:14:16.741179 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:16.741431 214436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1209 23:14:16.741488 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:16.820320 214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
I1209 23:14:16.913284 214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
I1209 23:14:16.946999 214436 ssh_runner.go:195] Run: systemctl --version
I1209 23:14:16.962105 214436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1209 23:14:17.219065 214436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1209 23:14:17.252608 214436 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1209 23:14:17.252693 214436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1209 23:14:17.269920 214436 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
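[editor's note] The two find/exec commands above are minikube's CNI cleanup: patch any loopback conf in place, then park bridge/podman configs by renaming them with a .mk_disabled suffix (here nothing matched, so nothing was disabled). A minimal Go sketch of the rename step follows; the path and suffix come from the logged command, but the code itself is illustrative, not minikube's implementation.

// disable_bridge_cni.go: hypothetical sketch of renaming bridge/podman
// CNI configs to *.mk_disabled, as the find -exec command above does.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", src)
		}
	}
}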
I1209 23:14:17.269945 214436 start.go:495] detecting cgroup driver to use...
I1209 23:14:17.269979 214436 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1209 23:14:17.270031 214436 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1209 23:14:17.292290 214436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1209 23:14:17.313786 214436 docker.go:217] disabling cri-docker service (if available) ...
I1209 23:14:17.313851 214436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1209 23:14:17.333858 214436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1209 23:14:17.356785 214436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1209 23:14:17.508643 214436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1209 23:14:17.648439 214436 docker.go:233] disabling docker service ...
I1209 23:14:17.648509 214436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1209 23:14:17.662611 214436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1209 23:14:17.676247 214436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1209 23:14:17.784732 214436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1209 23:14:17.900348 214436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1209 23:14:17.914362 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1209 23:14:17.939373 214436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I1209 23:14:17.950041 214436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1209 23:14:17.962877 214436 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1209 23:14:17.962952 214436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1209 23:14:17.973508 214436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1209 23:14:17.988371 214436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1209 23:14:18.002290 214436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1209 23:14:18.016703 214436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1209 23:14:18.027810 214436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1209 23:14:18.039465 214436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1209 23:14:18.049980 214436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1209 23:14:18.060540 214436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 23:14:18.164068 214436 ssh_runner.go:195] Run: sudo systemctl restart containerd
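[editor's note] The sed pipeline above rewrites /etc/containerd/config.toml before the restart: pin the pause image to registry.k8s.io/pause:3.2, force SystemdCgroup = false to match the cgroupfs driver detected earlier, and point conf_dir at /etc/cni/net.d. A minimal Go sketch of one of those substitutions follows; the regexp is taken from the logged sed expression, the file handling is illustrative.

// patch_containerd.go: hypothetical sketch of the SystemdCgroup rewrite
// performed above with sed, using Go's regexp package instead.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same substitution as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, patched, 0o644); err != nil {
		panic(err)
	}
}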
I1209 23:14:18.369830 214436 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1209 23:14:18.369976 214436 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1209 23:14:18.377528 214436 start.go:563] Will wait 60s for crictl version
I1209 23:14:18.377645 214436 ssh_runner.go:195] Run: which crictl
I1209 23:14:18.381626 214436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1209 23:14:18.429420 214436 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1209 23:14:18.429569 214436 ssh_runner.go:195] Run: containerd --version
I1209 23:14:18.452893 214436 ssh_runner.go:195] Run: containerd --version
I1209 23:14:18.476220 214436 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
I1209 23:14:18.477833 214436 cli_runner.go:164] Run: docker network inspect old-k8s-version-098617 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 23:14:18.503995 214436 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1209 23:14:18.510452 214436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1209 23:14:18.523269 214436 kubeadm.go:883] updating cluster {Name:old-k8s-version-098617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1209 23:14:18.523381 214436 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1209 23:14:18.523439 214436 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 23:14:18.567611 214436 containerd.go:627] all images are preloaded for containerd runtime.
I1209 23:14:18.567633 214436 containerd.go:534] Images already preloaded, skipping extraction
I1209 23:14:18.567704 214436 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 23:14:18.616359 214436 containerd.go:627] all images are preloaded for containerd runtime.
I1209 23:14:18.616429 214436 cache_images.go:84] Images are preloaded, skipping loading
I1209 23:14:18.616467 214436 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I1209 23:14:18.616628 214436 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-098617 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
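[editor's note] The kubelet drop-in above is generated from the node's config and written later to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal text/template sketch of rendering the same shape of unit follows; the struct and field names here are assumptions for illustration, not minikube's actual types.

// kubelet_unit.go: hypothetical sketch of rendering a kubelet systemd
// drop-in like the one logged above, using text/template.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf
[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	err := t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.20.0", "old-k8s-version-098617", "192.168.76.2"})
	if err != nil {
		panic(err)
	}
}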
I1209 23:14:18.616731 214436 ssh_runner.go:195] Run: sudo crictl info
I1209 23:14:18.665999 214436 cni.go:84] Creating CNI manager for ""
I1209 23:14:18.666019 214436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1209 23:14:18.666028 214436 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1209 23:14:18.666049 214436 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098617 NodeName:old-k8s-version-098617 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I1209 23:14:18.666177 214436 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-098617"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
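[editor's note] The multi-document kubeadm config above is shipped to /var/tmp/minikube/kubeadm.yaml.new (2125 bytes, per the scp line below). A quick sanity check of the generated documents can catch template mistakes; here is a minimal, hypothetical sketch using gopkg.in/yaml.v3 to verify the ClusterConfiguration's kubernetesVersion. The check is not part of minikube.

// check_kubeadm_yaml.go: hypothetical sketch that splits the generated
// multi-document kubeadm config and verifies kubernetesVersion.
package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			panic(err)
		}
		if m["kind"] == "ClusterConfiguration" && m["kubernetesVersion"] != "v1.20.0" {
			panic(fmt.Sprintf("unexpected kubernetesVersion: %v", m["kubernetesVersion"]))
		}
	}
}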
I1209 23:14:18.666244 214436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I1209 23:14:18.678867 214436 binaries.go:44] Found k8s binaries, skipping transfer
I1209 23:14:18.678984 214436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1209 23:14:18.688170 214436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I1209 23:14:18.712331 214436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1209 23:14:18.736122 214436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I1209 23:14:18.756576 214436 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1209 23:14:18.760308 214436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1209 23:14:18.770863 214436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 23:14:18.884998 214436 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 23:14:18.905477 214436 certs.go:68] Setting up /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617 for IP: 192.168.76.2
I1209 23:14:18.905500 214436 certs.go:194] generating shared ca certs ...
I1209 23:14:18.905517 214436 certs.go:226] acquiring lock for ca certs: {Name:mk5e5b08227e0c37038d2f29a9a492383a5cd230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 23:14:18.905651 214436 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19888-2244/.minikube/ca.key
I1209 23:14:18.905707 214436 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19888-2244/.minikube/proxy-client-ca.key
I1209 23:14:18.905717 214436 certs.go:256] generating profile certs ...
I1209 23:14:18.905802 214436 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/client.key
I1209 23:14:18.905865 214436 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/apiserver.key.982d6abc
I1209 23:14:18.905913 214436 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/proxy-client.key
I1209 23:14:18.906034 214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/7684.pem (1338 bytes)
W1209 23:14:18.906069 214436 certs.go:480] ignoring /home/jenkins/minikube-integration/19888-2244/.minikube/certs/7684_empty.pem, impossibly tiny 0 bytes
I1209 23:14:18.906080 214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca-key.pem (1675 bytes)
I1209 23:14:18.906104 214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/ca.pem (1078 bytes)
I1209 23:14:18.906132 214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/cert.pem (1123 bytes)
I1209 23:14:18.906156 214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/certs/key.pem (1675 bytes)
I1209 23:14:18.906204 214436 certs.go:484] found cert: /home/jenkins/minikube-integration/19888-2244/.minikube/files/etc/ssl/certs/76842.pem (1708 bytes)
I1209 23:14:18.906853 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1209 23:14:18.945538 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1209 23:14:18.991946 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1209 23:14:19.032635 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1209 23:14:19.089669 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I1209 23:14:19.117826 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1209 23:14:19.144134 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1209 23:14:19.170101 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/old-k8s-version-098617/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1209 23:14:19.195283 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/files/etc/ssl/certs/76842.pem --> /usr/share/ca-certificates/76842.pem (1708 bytes)
I1209 23:14:19.220701 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1209 23:14:19.245981 214436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19888-2244/.minikube/certs/7684.pem --> /usr/share/ca-certificates/7684.pem (1338 bytes)
I1209 23:14:19.270581 214436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1209 23:14:19.289767 214436 ssh_runner.go:195] Run: openssl version
I1209 23:14:19.295727 214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/76842.pem && ln -fs /usr/share/ca-certificates/76842.pem /etc/ssl/certs/76842.pem"
I1209 23:14:19.305797 214436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/76842.pem
I1209 23:14:19.309614 214436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 9 22:34 /usr/share/ca-certificates/76842.pem
I1209 23:14:19.309678 214436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/76842.pem
I1209 23:14:19.317916 214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/76842.pem /etc/ssl/certs/3ec20f2e.0"
I1209 23:14:19.327464 214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1209 23:14:19.337436 214436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1209 23:14:19.341829 214436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 9 22:26 /usr/share/ca-certificates/minikubeCA.pem
I1209 23:14:19.341900 214436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1209 23:14:19.349014 214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1209 23:14:19.358319 214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7684.pem && ln -fs /usr/share/ca-certificates/7684.pem /etc/ssl/certs/7684.pem"
I1209 23:14:19.368341 214436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7684.pem
I1209 23:14:19.372146 214436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 9 22:34 /usr/share/ca-certificates/7684.pem
I1209 23:14:19.372221 214436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7684.pem
I1209 23:14:19.379427 214436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7684.pem /etc/ssl/certs/51391683.0"
I1209 23:14:19.389001 214436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1209 23:14:19.393080 214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1209 23:14:19.400365 214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1209 23:14:19.407476 214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1209 23:14:19.414411 214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1209 23:14:19.421615 214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1209 23:14:19.428805 214436 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1209 23:14:19.435961 214436 kubeadm.go:392] StartCluster: {Name:old-k8s-version-098617 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-098617 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 23:14:19.436063 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1209 23:14:19.436125 214436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1209 23:14:19.489532 214436 cri.go:89] found id: "7b6f900a1282a6756e0904630740646ec98f08e7e8e41c3c55e56a30dba7bc7a"
I1209 23:14:19.489554 214436 cri.go:89] found id: "99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650"
I1209 23:14:19.489560 214436 cri.go:89] found id: "5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38"
I1209 23:14:19.489564 214436 cri.go:89] found id: "4298e59fb9c26bc4b6c5f5daf349a3292840d4b30dcb1cb11c299810d0ed0451"
I1209 23:14:19.489567 214436 cri.go:89] found id: "a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad"
I1209 23:14:19.489571 214436 cri.go:89] found id: "5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19"
I1209 23:14:19.489574 214436 cri.go:89] found id: "063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c"
I1209 23:14:19.489577 214436 cri.go:89] found id: "f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442"
I1209 23:14:19.489580 214436 cri.go:89] found id: "6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6"
I1209 23:14:19.489585 214436 cri.go:89] found id: ""
I1209 23:14:19.489639 214436 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1209 23:14:19.502454 214436 cri.go:116] JSON = null
W1209 23:14:19.502502 214436 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 9
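[editor's note] The warning above comes from comparing two views of the runtime: crictl ps reported 9 kube-system containers, while runc list (which only reports containers under the given root) returned null, so the unpause pass is skipped. Below is a minimal, hypothetical Go sketch of that comparison; the command lines are copied from the log, the wrapper around them is illustrative.

// compare_runtimes.go: hypothetical sketch of the paused-container check
// implied by the "list returned 0 containers, but ps returned 9" warning.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	psOut, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(psOut))

	runcOut, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		panic(err)
	}
	var listed []map[string]interface{} // JSON "null" unmarshals to a nil slice
	if err := json.Unmarshal(runcOut, &listed); err != nil {
		panic(err)
	}
	if len(listed) != len(ids) {
		fmt.Printf("list returned %d containers, but ps returned %d\n", len(listed), len(ids))
	}
}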
I1209 23:14:19.502565 214436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1209 23:14:19.513471 214436 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I1209 23:14:19.513494 214436 kubeadm.go:593] restartPrimaryControlPlane start ...
I1209 23:14:19.513547 214436 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1209 23:14:19.522730 214436 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1209 23:14:19.523161 214436 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-098617" does not appear in /home/jenkins/minikube-integration/19888-2244/kubeconfig
I1209 23:14:19.523268 214436 kubeconfig.go:62] /home/jenkins/minikube-integration/19888-2244/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-098617" cluster setting kubeconfig missing "old-k8s-version-098617" context setting]
I1209 23:14:19.523580 214436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-2244/kubeconfig: {Name:mke0607d72baeb496e6e8b72464517e7e676b09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 23:14:19.524796 214436 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1209 23:14:19.534596 214436 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I1209 23:14:19.534627 214436 kubeadm.go:597] duration metric: took 21.127623ms to restartPrimaryControlPlane
I1209 23:14:19.534637 214436 kubeadm.go:394] duration metric: took 98.686693ms to StartCluster
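
The kubeconfig repair above ("needs updating (will repair)") is a load-modify-write of the kubeconfig file, re-adding the missing cluster and context entries under the file lock shown at lock.go:35. A hedged sketch of that repair with client-go's clientcmd package (names and endpoint taken from the log; the logic is illustrative, not minikube's own kubeconfig.go):

package main

import (
    "k8s.io/client-go/tools/clientcmd"
    api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
    path := "/home/jenkins/minikube-integration/19888-2244/kubeconfig"
    cfg, err := clientcmd.LoadFromFile(path)
    if err != nil {
        panic(err)
    }

    name := "old-k8s-version-098617"
    // Re-add the missing cluster entry (endpoint from the log).
    if _, ok := cfg.Clusters[name]; !ok {
        cfg.Clusters[name] = &api.Cluster{Server: "https://192.168.76.2:8443"}
    }
    // Re-add the missing context entry pointing at that cluster.
    if _, ok := cfg.Contexts[name]; !ok {
        cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
    }

    if err := clientcmd.WriteToFile(*cfg, path); err != nil {
        panic(err)
    }
}
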
I1209 23:14:19.534651 214436 settings.go:142] acquiring lock: {Name:mk8e4d73490ddd425d99594b7cef42b0539f618d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 23:14:19.534718 214436 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19888-2244/kubeconfig
I1209 23:14:19.535292 214436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-2244/kubeconfig: {Name:mke0607d72baeb496e6e8b72464517e7e676b09e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 23:14:19.535471 214436 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1209 23:14:19.535795 214436 config.go:182] Loaded profile config "old-k8s-version-098617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1209 23:14:19.535961 214436 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1209 23:14:19.536118 214436 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-098617"
I1209 23:14:19.536150 214436 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-098617"
W1209 23:14:19.536245 214436 addons.go:243] addon storage-provisioner should already be in state true
I1209 23:14:19.536284 214436 host.go:66] Checking if "old-k8s-version-098617" exists ...
I1209 23:14:19.536172 214436 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-098617"
I1209 23:14:19.536398 214436 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-098617"
I1209 23:14:19.536676 214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
I1209 23:14:19.536179 214436 addons.go:69] Setting dashboard=true in profile "old-k8s-version-098617"
I1209 23:14:19.537593 214436 addons.go:234] Setting addon dashboard=true in "old-k8s-version-098617"
W1209 23:14:19.537603 214436 addons.go:243] addon dashboard should already be in state true
I1209 23:14:19.537627 214436 host.go:66] Checking if "old-k8s-version-098617" exists ...
I1209 23:14:19.538097 214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
I1209 23:14:19.541465 214436 out.go:177] * Verifying Kubernetes components...
I1209 23:14:19.536204 214436 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-098617"
I1209 23:14:19.541795 214436 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-098617"
W1209 23:14:19.541828 214436 addons.go:243] addon metrics-server should already be in state true
I1209 23:14:19.541879 214436 host.go:66] Checking if "old-k8s-version-098617" exists ...
I1209 23:14:19.542310 214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
I1209 23:14:19.542505 214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
I1209 23:14:19.546834 214436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 23:14:19.595603 214436 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-098617"
W1209 23:14:19.595626 214436 addons.go:243] addon default-storageclass should already be in state true
I1209 23:14:19.595653 214436 host.go:66] Checking if "old-k8s-version-098617" exists ...
I1209 23:14:19.596040 214436 cli_runner.go:164] Run: docker container inspect old-k8s-version-098617 --format={{.State.Status}}
I1209 23:14:19.609310 214436 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1209 23:14:19.612224 214436 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1209 23:14:19.612258 214436 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1209 23:14:19.612329 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:19.627957 214436 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1209 23:14:19.629236 214436 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1209 23:14:19.637559 214436 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1209 23:14:19.637663 214436 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1209 23:14:19.637684 214436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1209 23:14:19.637755 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:19.644194 214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1209 23:14:19.644251 214436 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1209 23:14:19.644344 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:19.680697 214436 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1209 23:14:19.680718 214436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1209 23:14:19.680778 214436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098617
I1209 23:14:19.691444 214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
I1209 23:14:19.714339 214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
I1209 23:14:19.717394 214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
I1209 23:14:19.743286 214436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19888-2244/.minikube/machines/old-k8s-version-098617/id_rsa Username:docker}
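
Each sshutil.go:53 line above opens a fresh SSH session into the node container (same forwarded port 33063, same machine key), one per concurrent addon transfer. A minimal sketch with golang.org/x/crypto/ssh (illustrative; minikube's sshutil wraps this with its own scp helpers):

package main

import (
    "os"

    "golang.org/x/crypto/ssh"
)

func main() {
    keyPath := "/home/jenkins/minikube-integration/19888-2244/.minikube/machines/" +
        "old-k8s-version-098617/id_rsa"
    key, err := os.ReadFile(keyPath)
    if err != nil {
        panic(err)
    }
    signer, err := ssh.ParsePrivateKey(key)
    if err != nil {
        panic(err)
    }

    client, err := ssh.Dial("tcp", "127.0.0.1:33063", &ssh.ClientConfig{
        User:            "docker",
        Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node, not production
    })
    if err != nil {
        panic(err)
    }
    defer client.Close()

    // One session per remote command, e.g. the kubelet start that follows in the log.
    sess, err := client.NewSession()
    if err != nil {
        panic(err)
    }
    defer sess.Close()
    out, _ := sess.CombinedOutput("sudo systemctl start kubelet")
    os.Stdout.Write(out)
}
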
I1209 23:14:19.807386 214436 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 23:14:19.872774 214436 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-098617" to be "Ready" ...
I1209 23:14:19.890879 214436 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1209 23:14:19.890903 214436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1209 23:14:19.926556 214436 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1209 23:14:19.926577 214436 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1209 23:14:19.965972 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1209 23:14:19.973581 214436 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1209 23:14:19.973652 214436 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1209 23:14:19.985040 214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1209 23:14:19.985118 214436 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1209 23:14:19.992759 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1209 23:14:20.053067 214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1209 23:14:20.053230 214436 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1209 23:14:20.058520 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1209 23:14:20.102351 214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1209 23:14:20.102441 214436 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1209 23:14:20.180760 214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1209 23:14:20.180834 214436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1209 23:14:20.298512 214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I1209 23:14:20.298589 214436 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
W1209 23:14:20.318058 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:20.318153 214436 retry.go:31] will retry after 368.964975ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 23:14:20.318218 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:20.318251 214436 retry.go:31] will retry after 182.65675ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 23:14:20.334856 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:20.334938 214436 retry.go:31] will retry after 354.416897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
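
The retry.go:31 lines above (and the many that follow) show each kubectl apply being reissued with a growing, jittered delay while the apiserver is still refusing connections on localhost:8443. A generic helper in that spirit (a sketch; minikube's own retry.go may differ in backoff policy):

package main

import (
    "fmt"
    "math/rand"
    "os/exec"
    "time"
)

// retry runs cmd up to attempts times, sleeping an exponentially growing,
// jittered delay between failures, mirroring the "will retry after ..." lines.
func retry(attempts int, base time.Duration, cmd func() error) error {
    var err error
    for i := 0; i < attempts; i++ {
        if err = cmd(); err == nil {
            return nil
        }
        delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
        fmt.Printf("will retry after %v: %v\n", delay, err)
        time.Sleep(delay)
    }
    return err
}

func main() {
    apply := func() error {
        return exec.Command("kubectl", "apply", "--force", "-f",
            "/etc/kubernetes/addons/storageclass.yaml").Run()
    }
    if err := retry(5, 200*time.Millisecond, apply); err != nil {
        fmt.Println("giving up:", err)
    }
}
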
I1209 23:14:20.346439 214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1209 23:14:20.346512 214436 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1209 23:14:20.367645 214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1209 23:14:20.367723 214436 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1209 23:14:20.386968 214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1209 23:14:20.387041 214436 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1209 23:14:20.405663 214436 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1209 23:14:20.405737 214436 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1209 23:14:20.424580 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1209 23:14:20.501836 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1209 23:14:20.541340 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:20.541442 214436 retry.go:31] will retry after 245.971621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 23:14:20.623200 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:20.623287 214436 retry.go:31] will retry after 320.731058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:20.687552 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1209 23:14:20.689896 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1209 23:14:20.788199 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1209 23:14:20.860958 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:20.861126 214436 retry.go:31] will retry after 207.262482ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 23:14:20.861075 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:20.861186 214436 retry.go:31] will retry after 245.686325ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 23:14:20.939480 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:20.939582 214436 retry.go:31] will retry after 337.234046ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:20.944835 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1209 23:14:21.041641 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:21.041719 214436 retry.go:31] will retry after 480.106769ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:21.068934 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1209 23:14:21.107416 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1209 23:14:21.175048 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:21.175081 214436 retry.go:31] will retry after 761.011958ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 23:14:21.270567 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:21.270602 214436 retry.go:31] will retry after 698.060617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:21.277899 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1209 23:14:21.367222 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:21.367257 214436 retry.go:31] will retry after 446.449156ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:21.522334 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1209 23:14:21.619084 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:21.619116 214436 retry.go:31] will retry after 430.642974ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:21.814387 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1209 23:14:21.874049 214436 node_ready.go:53] error getting node "old-k8s-version-098617": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-098617": dial tcp 192.168.76.2:8443: connect: connection refused
W1209 23:14:21.899026 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:21.899068 214436 retry.go:31] will retry after 485.374595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:21.936282 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1209 23:14:21.969507 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1209 23:14:22.050786 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1209 23:14:22.067895 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:22.067928 214436 retry.go:31] will retry after 917.617953ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 23:14:22.135708 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:22.135742 214436 retry.go:31] will retry after 680.996206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 23:14:22.180525 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:22.180558 214436 retry.go:31] will retry after 1.624197287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:22.385524 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1209 23:14:22.479403 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:22.479436 214436 retry.go:31] will retry after 1.355295288s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:22.817659 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1209 23:14:22.908908 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:22.908989 214436 retry.go:31] will retry after 691.827429ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:22.986375 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1209 23:14:23.068432 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:23.068506 214436 retry.go:31] will retry after 1.571617906s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:23.601528 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1209 23:14:23.711313 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:23.711349 214436 retry.go:31] will retry after 2.560422129s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:23.805630 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1209 23:14:23.835010 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1209 23:14:23.922543 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:23.922579 214436 retry.go:31] will retry after 1.632331853s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 23:14:23.962038 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:23.962074 214436 retry.go:31] will retry after 1.584214603s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:24.373654 214436 node_ready.go:53] error getting node "old-k8s-version-098617": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-098617": dial tcp 192.168.76.2:8443: connect: connection refused
I1209 23:14:24.640407 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1209 23:14:24.739085 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:24.739116 214436 retry.go:31] will retry after 1.912935358s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:25.548183 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1209 23:14:25.555017 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1209 23:14:25.791670 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:25.791699 214436 retry.go:31] will retry after 3.115417501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 23:14:25.861829 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:25.861860 214436 retry.go:31] will retry after 3.058376551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:26.272603 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1209 23:14:26.457643 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:26.457670 214436 retry.go:31] will retry after 3.34728315s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:26.653032 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1209 23:14:26.754662 214436 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:26.754697 214436 retry.go:31] will retry after 2.297633304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 23:14:26.874171 214436 node_ready.go:53] error getting node "old-k8s-version-098617": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-098617": dial tcp 192.168.76.2:8443: connect: connection refused
I1209 23:14:28.908250 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1209 23:14:28.920581 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1209 23:14:29.053004 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1209 23:14:29.805946 214436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1209 23:14:39.672129 214436 node_ready.go:49] node "old-k8s-version-098617" has status "Ready":"True"
I1209 23:14:39.672159 214436 node_ready.go:38] duration metric: took 19.799350354s for node "old-k8s-version-098617" to be "Ready" ...
I1209 23:14:39.672170 214436 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
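
The pod_ready lines that follow come from polling each system-critical pod until its Ready condition reports True, with a per-pod deadline. A hedged client-go sketch of one such wait (a plain poll loop; minikube's pod_ready.go adds duration metrics and richer logging):

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    defer cancel()

    name := "kube-apiserver-old-k8s-version-098617"
    for {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err == nil {
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    fmt.Printf("pod %q has status \"Ready\":%q\n", name, string(c.Status))
                    if c.Status == corev1.ConditionTrue {
                        return
                    }
                }
            }
        }
        select {
        case <-ctx.Done():
            fmt.Println("timed out waiting for", name)
            return
        case <-time.After(2 * time.Second):
        }
    }
}
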
I1209 23:14:40.109927 214436 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-tz959" in "kube-system" namespace to be "Ready" ...
I1209 23:14:40.540131 214436 pod_ready.go:93] pod "coredns-74ff55c5b-tz959" in "kube-system" namespace has status "Ready":"True"
I1209 23:14:40.540164 214436 pod_ready.go:82] duration metric: took 430.192557ms for pod "coredns-74ff55c5b-tz959" in "kube-system" namespace to be "Ready" ...
I1209 23:14:40.540175 214436 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
I1209 23:14:40.625483 214436 pod_ready.go:93] pod "etcd-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"True"
I1209 23:14:40.625508 214436 pod_ready.go:82] duration metric: took 85.324741ms for pod "etcd-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
I1209 23:14:40.625522 214436 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
I1209 23:14:42.663549 214436 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:14:44.344465 214436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (15.423842724s)
I1209 23:14:44.344719 214436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (15.43643118s)
I1209 23:14:44.344841 214436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.291810421s)
I1209 23:14:44.344924 214436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (14.538951397s)
I1209 23:14:44.344943 214436 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-098617"
I1209 23:14:44.346533 214436 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-098617 addons enable metrics-server
I1209 23:14:44.356434 214436 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
I1209 23:14:44.357746 214436 addons.go:510] duration metric: took 24.821782147s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
I1209 23:14:45.134536 214436 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:14:47.141549 214436 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:14:48.132334 214436 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"True"
I1209 23:14:48.132358 214436 pod_ready.go:82] duration metric: took 7.506827533s for pod "kube-apiserver-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
I1209 23:14:48.132369 214436 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
I1209 23:14:50.139639 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:14:52.639558 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:14:55.139828 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:14:57.639611 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:00.171421 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:02.639287 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:04.640717 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:07.139672 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:09.139967 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:11.638966 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:13.640854 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:16.139178 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:18.139221 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:20.140329 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:22.638415 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:24.639115 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:27.138364 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:29.152113 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:31.638753 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:33.639162 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:36.138607 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:38.139505 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:40.150739 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:42.640430 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:45.139865 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:47.639320 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:50.142696 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:52.644365 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:55.140057 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:57.140353 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:15:59.638689 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:01.639005 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:03.640451 214436 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:05.638852 214436 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"True"
I1209 23:16:05.638877 214436 pod_ready.go:82] duration metric: took 1m17.506500215s for pod "kube-controller-manager-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
I1209 23:16:05.638889 214436 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d8xtk" in "kube-system" namespace to be "Ready" ...
I1209 23:16:05.644829 214436 pod_ready.go:93] pod "kube-proxy-d8xtk" in "kube-system" namespace has status "Ready":"True"
I1209 23:16:05.644862 214436 pod_ready.go:82] duration metric: took 5.964955ms for pod "kube-proxy-d8xtk" in "kube-system" namespace to be "Ready" ...
I1209 23:16:05.644874 214436 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
I1209 23:16:05.657954 214436 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-098617" in "kube-system" namespace has status "Ready":"True"
I1209 23:16:05.657984 214436 pod_ready.go:82] duration metric: took 13.100835ms for pod "kube-scheduler-old-k8s-version-098617" in "kube-system" namespace to be "Ready" ...
I1209 23:16:05.657996 214436 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace to be "Ready" ...
I1209 23:16:07.664327 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:09.664535 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:11.664874 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:14.164375 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:16.164554 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:18.165003 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:20.664505 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:23.166856 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:25.663280 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:27.665953 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:30.165014 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:32.664603 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:35.163546 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:37.164621 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:39.664548 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:41.666072 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:44.164556 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:46.664774 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:49.164694 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:51.664221 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:54.164755 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:56.664332 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:16:58.665784 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:01.170559 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:03.664373 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:05.664755 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:08.177341 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:10.664387 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:12.665123 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:15.165255 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:17.665499 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:20.164857 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:22.664577 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:25.164453 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:27.164567 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:29.165095 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:31.237487 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:33.665522 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:36.164435 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:38.164666 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:40.663796 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:43.165635 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:45.169433 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:47.665053 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:50.164542 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:52.165095 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:54.667298 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:57.164432 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:17:59.164646 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:01.165481 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:03.669312 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:06.164905 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:08.663983 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:10.664599 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:13.164801 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:15.164914 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:17.165117 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:19.664270 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:21.665473 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:24.164825 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:26.664623 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:29.164503 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:31.164878 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:33.664476 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:35.664757 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:38.165193 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:40.166450 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:42.664073 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:45.165320 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:47.664230 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:49.664901 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:52.176681 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:54.663964 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:57.165662 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:18:59.664229 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:01.664972 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:03.667495 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:06.165347 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:08.663798 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:10.664539 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:13.164907 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:15.665047 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:18.164405 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:20.165209 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:22.225580 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:24.664588 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:27.164381 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:29.165815 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:31.664226 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:34.165399 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:36.664664 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:39.164359 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:41.664613 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:43.664834 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:46.165624 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:48.663844 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:50.664390 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:52.665353 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:55.164652 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:57.165671 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:19:59.664078 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:20:01.665558 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:20:04.163974 214436 pod_ready.go:103] pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace has status "Ready":"False"
I1209 23:20:05.664675 214436 pod_ready.go:82] duration metric: took 4m0.006664918s for pod "metrics-server-9975d5f86-4rw7k" in "kube-system" namespace to be "Ready" ...
E1209 23:20:05.664698 214436 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1209 23:20:05.664709 214436 pod_ready.go:39] duration metric: took 5m25.992527502s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
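
The 4m0s figure above is the extra-wait budget for metrics-server expiring: the pod can never become Ready because its image pull fails (see the kubelet findings gathered below), so the bounded wait surfaces "context deadline exceeded" rather than a pod-level error. A minimal sketch of that bounded-wait shape, standard library only; waitWithDeadline and checkReady are hypothetical names:

package waitdemo

import (
	"context"
	"time"
)

// waitWithDeadline polls checkReady until it succeeds or the deadline passes,
// in which case the caller sees context.DeadlineExceeded -- the condition
// logged above as "waitPodCondition: context deadline exceeded".
func waitWithDeadline(parent context.Context, timeout time.Duration, checkReady func(context.Context) bool) error {
	ctx, cancel := context.WithTimeout(parent, timeout)
	defer cancel()
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		if checkReady(ctx) {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // context.DeadlineExceeded after the 4m budget
		case <-ticker.C:
		}
	}
}

Treating the deadline as a soft failure is consistent with what follows: the run does not abort here but proceeds into process checks and log gathering.
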
I1209 23:20:05.664724 214436 api_server.go:52] waiting for apiserver process to appear ...
I1209 23:20:05.664755 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1209 23:20:05.664809 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1209 23:20:05.716330 214436 cri.go:89] found id: "9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df"
I1209 23:20:05.716349 214436 cri.go:89] found id: "6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6"
I1209 23:20:05.716356 214436 cri.go:89] found id: ""
I1209 23:20:05.716364 214436 logs.go:282] 2 containers: [9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df 6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6]
I1209 23:20:05.716416 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:05.720613 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:05.724904 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1209 23:20:05.724971 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1209 23:20:05.794870 214436 cri.go:89] found id: "55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd"
I1209 23:20:05.794890 214436 cri.go:89] found id: "063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c"
I1209 23:20:05.794895 214436 cri.go:89] found id: ""
I1209 23:20:05.794903 214436 logs.go:282] 2 containers: [55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd 063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c]
I1209 23:20:05.795013 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:05.798664 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:05.806904 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1209 23:20:05.806990 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1209 23:20:05.883953 214436 cri.go:89] found id: "dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b"
I1209 23:20:05.883974 214436 cri.go:89] found id: "99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650"
I1209 23:20:05.883979 214436 cri.go:89] found id: ""
I1209 23:20:05.883986 214436 logs.go:282] 2 containers: [dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b 99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650]
I1209 23:20:05.884039 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:05.888212 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:05.892459 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1209 23:20:05.892527 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1209 23:20:05.952763 214436 cri.go:89] found id: "e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e"
I1209 23:20:05.952780 214436 cri.go:89] found id: "5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19"
I1209 23:20:05.952785 214436 cri.go:89] found id: ""
I1209 23:20:05.952792 214436 logs.go:282] 2 containers: [e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e 5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19]
I1209 23:20:05.952849 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:05.957287 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:05.961182 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1209 23:20:05.961248 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1209 23:20:06.019302 214436 cri.go:89] found id: "3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50"
I1209 23:20:06.019321 214436 cri.go:89] found id: "a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad"
I1209 23:20:06.019325 214436 cri.go:89] found id: ""
I1209 23:20:06.019332 214436 logs.go:282] 2 containers: [3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50 a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad]
I1209 23:20:06.019393 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:06.024900 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:06.030763 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1209 23:20:06.030843 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1209 23:20:06.123188 214436 cri.go:89] found id: "10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4"
I1209 23:20:06.123204 214436 cri.go:89] found id: "f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442"
I1209 23:20:06.123209 214436 cri.go:89] found id: ""
I1209 23:20:06.123215 214436 logs.go:282] 2 containers: [10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4 f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442]
I1209 23:20:06.123270 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:06.138620 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:06.145339 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1209 23:20:06.145447 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1209 23:20:06.212909 214436 cri.go:89] found id: "394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee"
I1209 23:20:06.212943 214436 cri.go:89] found id: "5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38"
I1209 23:20:06.212953 214436 cri.go:89] found id: ""
I1209 23:20:06.212961 214436 logs.go:282] 2 containers: [394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee 5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38]
I1209 23:20:06.213032 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:06.218135 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:06.222452 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1209 23:20:06.222542 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1209 23:20:06.274060 214436 cri.go:89] found id: "f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2"
I1209 23:20:06.274088 214436 cri.go:89] found id: ""
I1209 23:20:06.274096 214436 logs.go:282] 1 containers: [f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2]
I1209 23:20:06.274150 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:06.278975 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1209 23:20:06.279049 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1209 23:20:06.347374 214436 cri.go:89] found id: "5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa"
I1209 23:20:06.347400 214436 cri.go:89] found id: "9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d"
I1209 23:20:06.347405 214436 cri.go:89] found id: ""
I1209 23:20:06.347413 214436 logs.go:282] 2 containers: [5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa 9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d]
I1209 23:20:06.347508 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:06.351947 214436 ssh_runner.go:195] Run: which crictl
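
Before pulling logs, each component above is resolved to container IDs with crictl; two IDs per component means a live container plus an exited predecessor from before the restart. A minimal sketch of the same lookup with os/exec, assuming crictl is reachable locally (minikube issues these commands on the node through its ssh_runner); the flags match the commands logged above:

package crilist

import (
	"os/exec"
	"strings"
)

// listContainers returns all container IDs (running and exited) whose name
// matches the filter, mirroring `sudo crictl ps -a --quiet --name=...`.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; strings.Fields splits the
	// newline-separated IDs, giving the 1- or 2-element lists seen above.
	return strings.Fields(string(out)), nil
}
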
I1209 23:20:06.356871 214436 logs.go:123] Gathering logs for kube-controller-manager [f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442] ...
I1209 23:20:06.356903 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442"
I1209 23:20:06.435020 214436 logs.go:123] Gathering logs for container status ...
I1209 23:20:06.435060 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1209 23:20:06.509695 214436 logs.go:123] Gathering logs for dmesg ...
I1209 23:20:06.509772 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1209 23:20:06.529377 214436 logs.go:123] Gathering logs for kube-proxy [3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50] ...
I1209 23:20:06.529453 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50"
I1209 23:20:06.604733 214436 logs.go:123] Gathering logs for coredns [99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650] ...
I1209 23:20:06.604761 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650"
I1209 23:20:06.657707 214436 logs.go:123] Gathering logs for kube-scheduler [e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e] ...
I1209 23:20:06.657734 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e"
I1209 23:20:06.726130 214436 logs.go:123] Gathering logs for kube-scheduler [5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19] ...
I1209 23:20:06.726159 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19"
I1209 23:20:06.781721 214436 logs.go:123] Gathering logs for kindnet [394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee] ...
I1209 23:20:06.781750 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee"
I1209 23:20:06.847695 214436 logs.go:123] Gathering logs for storage-provisioner [9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d] ...
I1209 23:20:06.847726 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d"
I1209 23:20:06.894335 214436 logs.go:123] Gathering logs for containerd ...
I1209 23:20:06.894359 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1209 23:20:06.977797 214436 logs.go:123] Gathering logs for kubelet ...
I1209 23:20:06.977838 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1209 23:20:07.057997 214436 logs.go:138] Found kubelet problem: Dec 09 23:14:42 old-k8s-version-098617 kubelet[661]: E1209 23:14:42.573159 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1209 23:20:07.058238 214436 logs.go:138] Found kubelet problem: Dec 09 23:14:43 old-k8s-version-098617 kubelet[661]: E1209 23:14:43.065876 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.060674 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:03 old-k8s-version-098617 kubelet[661]: E1209 23:15:03.237918 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.061175 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:04 old-k8s-version-098617 kubelet[661]: E1209 23:15:04.242454 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.063972 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:05 old-k8s-version-098617 kubelet[661]: E1209 23:15:05.924038 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1209 23:20:07.064687 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:09 old-k8s-version-098617 kubelet[661]: E1209 23:15:09.067006 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.065159 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:14 old-k8s-version-098617 kubelet[661]: E1209 23:15:14.272914 661 pod_workers.go:191] Error syncing pod 5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c ("storage-provisioner_kube-system(5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c)"
W1209 23:20:07.065395 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:17 old-k8s-version-098617 kubelet[661]: E1209 23:15:17.595843 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.066323 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:20 old-k8s-version-098617 kubelet[661]: E1209 23:15:20.301079 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.066667 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:29 old-k8s-version-098617 kubelet[661]: E1209 23:15:29.067084 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.069859 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:30 old-k8s-version-098617 kubelet[661]: E1209 23:15:30.605005 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1209 23:20:07.070661 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:44 old-k8s-version-098617 kubelet[661]: E1209 23:15:44.374051 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.070901 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:44 old-k8s-version-098617 kubelet[661]: E1209 23:15:44.595088 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.071303 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:49 old-k8s-version-098617 kubelet[661]: E1209 23:15:49.067450 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.071533 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:55 old-k8s-version-098617 kubelet[661]: E1209 23:15:55.596194 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.071937 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:00 old-k8s-version-098617 kubelet[661]: E1209 23:16:00.594905 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.072160 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:09 old-k8s-version-098617 kubelet[661]: E1209 23:16:09.595293 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.072591 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:12 old-k8s-version-098617 kubelet[661]: E1209 23:16:12.595368 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.075234 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:24 old-k8s-version-098617 kubelet[661]: E1209 23:16:24.603234 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1209 23:20:07.075861 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:28 old-k8s-version-098617 kubelet[661]: E1209 23:16:28.526789 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.076267 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:29 old-k8s-version-098617 kubelet[661]: E1209 23:16:29.529315 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.076475 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:36 old-k8s-version-098617 kubelet[661]: E1209 23:16:36.595249 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.076956 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:40 old-k8s-version-098617 kubelet[661]: E1209 23:16:40.594761 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.077146 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:49 old-k8s-version-098617 kubelet[661]: E1209 23:16:49.599764 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.077470 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:55 old-k8s-version-098617 kubelet[661]: E1209 23:16:55.595001 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.077652 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:02 old-k8s-version-098617 kubelet[661]: E1209 23:17:02.595200 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.077975 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:09 old-k8s-version-098617 kubelet[661]: E1209 23:17:09.594689 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.078198 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:16 old-k8s-version-098617 kubelet[661]: E1209 23:17:16.595250 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.078594 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:22 old-k8s-version-098617 kubelet[661]: E1209 23:17:22.594748 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.078921 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:27 old-k8s-version-098617 kubelet[661]: E1209 23:17:27.596051 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.079296 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:36 old-k8s-version-098617 kubelet[661]: E1209 23:17:36.594954 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.079540 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:39 old-k8s-version-098617 kubelet[661]: E1209 23:17:39.595361 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.079899 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:47 old-k8s-version-098617 kubelet[661]: E1209 23:17:47.595830 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.082430 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:53 old-k8s-version-098617 kubelet[661]: E1209 23:17:53.605847 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1209 23:20:07.083089 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:58 old-k8s-version-098617 kubelet[661]: E1209 23:17:58.790114 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.083442 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:59 old-k8s-version-098617 kubelet[661]: E1209 23:17:59.799377 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.083650 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:05 old-k8s-version-098617 kubelet[661]: E1209 23:18:05.595490 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.084004 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:12 old-k8s-version-098617 kubelet[661]: E1209 23:18:12.594695 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.084210 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:16 old-k8s-version-098617 kubelet[661]: E1209 23:18:16.595137 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.084568 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:24 old-k8s-version-098617 kubelet[661]: E1209 23:18:24.595133 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.084772 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:30 old-k8s-version-098617 kubelet[661]: E1209 23:18:30.595155 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.085121 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:38 old-k8s-version-098617 kubelet[661]: E1209 23:18:38.594827 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.085329 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:45 old-k8s-version-098617 kubelet[661]: E1209 23:18:45.599645 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.085679 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:52 old-k8s-version-098617 kubelet[661]: E1209 23:18:52.594642 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.085883 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:56 old-k8s-version-098617 kubelet[661]: E1209 23:18:56.595179 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.086254 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:05 old-k8s-version-098617 kubelet[661]: E1209 23:19:05.595377 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.086460 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:09 old-k8s-version-098617 kubelet[661]: E1209 23:19:09.601501 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.086828 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:17 old-k8s-version-098617 kubelet[661]: E1209 23:19:17.599504 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.087033 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:23 old-k8s-version-098617 kubelet[661]: E1209 23:19:23.595388 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.087454 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:28 old-k8s-version-098617 kubelet[661]: E1209 23:19:28.594771 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.087676 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:34 old-k8s-version-098617 kubelet[661]: E1209 23:19:34.595081 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.088025 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:40 old-k8s-version-098617 kubelet[661]: E1209 23:19:40.595347 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.088229 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:48 old-k8s-version-098617 kubelet[661]: E1209 23:19:48.595049 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.088583 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: E1209 23:19:54.595330 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.088793 214436 logs.go:138] Found kubelet problem: Dec 09 23:20:01 old-k8s-version-098617 kubelet[661]: E1209 23:20:01.601974 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1209 23:20:07.088807 214436 logs.go:123] Gathering logs for describe nodes ...
I1209 23:20:07.088833 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
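[Editor's note] The describe-nodes step above shells out to the kubectl binary minikube stages inside the node, pointed at the in-node kubeconfig. A minimal Go sketch of that invocation, assuming the paths copied from the log line exist on the machine running it:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same command ssh_runner executes inside the node: the staged
        // kubectl binary against the in-node kubeconfig.
        cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.20.0/kubectl",
            "describe", "nodes",
            "--kubeconfig=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("describe nodes failed: %v\n", err)
        }
        fmt.Print(string(out))
    }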
I1209 23:20:07.280596 214436 logs.go:123] Gathering logs for kube-controller-manager [10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4] ...
I1209 23:20:07.280630 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4"
I1209 23:20:07.347243 214436 logs.go:123] Gathering logs for kindnet [5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38] ...
I1209 23:20:07.347276 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38"
I1209 23:20:07.392055 214436 logs.go:123] Gathering logs for kube-apiserver [6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6] ...
I1209 23:20:07.392083 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6"
I1209 23:20:07.466330 214436 logs.go:123] Gathering logs for etcd [55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd] ...
I1209 23:20:07.466359 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd"
I1209 23:20:07.519621 214436 logs.go:123] Gathering logs for coredns [dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b] ...
I1209 23:20:07.519654 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b"
I1209 23:20:07.568042 214436 logs.go:123] Gathering logs for kube-proxy [a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad] ...
I1209 23:20:07.568069 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad"
I1209 23:20:07.624539 214436 logs.go:123] Gathering logs for kubernetes-dashboard [f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2] ...
I1209 23:20:07.624622 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2"
I1209 23:20:07.667855 214436 logs.go:123] Gathering logs for storage-provisioner [5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa] ...
I1209 23:20:07.667933 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa"
I1209 23:20:07.707464 214436 logs.go:123] Gathering logs for kube-apiserver [9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df] ...
I1209 23:20:07.707545 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df"
I1209 23:20:07.767587 214436 logs.go:123] Gathering logs for etcd [063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c] ...
I1209 23:20:07.767619 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c"
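[Editor's note] Every per-component gathering step in this block is the same call with a different container ID: tail the last 400 lines via crictl. A minimal sketch of that step; tailContainerLogs is an illustrative helper name, not minikube's:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs mirrors the repeated gathering step above:
    // fetch the last 400 log lines of one CRI container by ID.
    func tailContainerLogs(id string) (string, error) {
        out, err := exec.Command("sudo", "/usr/bin/crictl",
            "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        // Placeholder ID; real IDs come from `crictl ps -a --quiet`.
        logs, err := tailContainerLogs("063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c")
        if err != nil {
            fmt.Println("crictl logs failed:", err)
        }
        fmt.Print(logs)
    }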
I1209 23:20:07.815180 214436 out.go:358] Setting ErrFile to fd 2...
I1209 23:20:07.815206 214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1209 23:20:07.815278 214436 out.go:270] X Problems detected in kubelet:
W1209 23:20:07.815294 214436 out.go:270] Dec 09 23:19:34 old-k8s-version-098617 kubelet[661]: E1209 23:19:34.595081 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.815305 214436 out.go:270] Dec 09 23:19:40 old-k8s-version-098617 kubelet[661]: E1209 23:19:40.595347 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.815314 214436 out.go:270] Dec 09 23:19:48 old-k8s-version-098617 kubelet[661]: E1209 23:19:48.595049 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:07.815321 214436 out.go:270] Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: E1209 23:19:54.595330 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:07.815328 214436 out.go:270] Dec 09 23:20:01 old-k8s-version-098617 kubelet[661]: E1209 23:20:01.601974 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1209 23:20:07.815335 214436 out.go:358] Setting ErrFile to fd 2...
I1209 23:20:07.815343 214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:20:17.816817 214436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 23:20:17.829175 214436 api_server.go:72] duration metric: took 5m58.293675473s to wait for apiserver process to appear ...
I1209 23:20:17.829198 214436 api_server.go:88] waiting for apiserver healthz status ...
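[Editor's note] The healthz wait announced here amounts to polling the apiserver's /healthz endpoint until it answers 200 or a deadline passes. A hedged sketch of such a poll; the URL is a placeholder (the real apiserver address is not shown in this log), and minikube's actual api_server.go logic differs in detail:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Placeholder endpoint: substitute the cluster's real apiserver URL.
        url := "https://192.168.76.2:8443/healthz"
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver serves a self-signed cert; skip verification
            // for this illustrative health probe only.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("apiserver healthz: ok")
                return
            }
            if resp != nil {
                resp.Body.Close()
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("apiserver healthz: timed out")
    }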
I1209 23:20:17.829236 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1209 23:20:17.829298 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1209 23:20:17.876769 214436 cri.go:89] found id: "9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df"
I1209 23:20:17.876789 214436 cri.go:89] found id: "6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6"
I1209 23:20:17.876794 214436 cri.go:89] found id: ""
I1209 23:20:17.876802 214436 logs.go:282] 2 containers: [9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df 6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6]
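[Editor's note] Each container lookup follows the pattern visible above: crictl ps across all states, filtered by name, with the non-empty IDs collected from stdout. A minimal sketch of that discovery step; listContainerIDs is an illustrative helper name:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs mirrors the crictl invocation shown in the log:
    // all states, filtered by container name, quiet output (IDs only).
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a",
            "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(string(out), "\n") {
            if line = strings.TrimSpace(line); line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := listContainerIDs("kube-apiserver")
        if err != nil {
            fmt.Println("crictl failed:", err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }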
I1209 23:20:17.876858 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:17.881224 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:17.885015 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1209 23:20:17.885093 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1209 23:20:17.929102 214436 cri.go:89] found id: "55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd"
I1209 23:20:17.929125 214436 cri.go:89] found id: "063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c"
I1209 23:20:17.929131 214436 cri.go:89] found id: ""
I1209 23:20:17.929139 214436 logs.go:282] 2 containers: [55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd 063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c]
I1209 23:20:17.929197 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:17.933926 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:17.938094 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1209 23:20:17.938162 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1209 23:20:17.977492 214436 cri.go:89] found id: "dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b"
I1209 23:20:17.977522 214436 cri.go:89] found id: "99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650"
I1209 23:20:17.977527 214436 cri.go:89] found id: ""
I1209 23:20:17.977534 214436 logs.go:282] 2 containers: [dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b 99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650]
I1209 23:20:17.977590 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:17.981288 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:17.985261 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1209 23:20:17.985330 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1209 23:20:18.029284 214436 cri.go:89] found id: "e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e"
I1209 23:20:18.029319 214436 cri.go:89] found id: "5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19"
I1209 23:20:18.029325 214436 cri.go:89] found id: ""
I1209 23:20:18.029332 214436 logs.go:282] 2 containers: [e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e 5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19]
I1209 23:20:18.029417 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:18.033631 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:18.037655 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1209 23:20:18.037755 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1209 23:20:18.081360 214436 cri.go:89] found id: "3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50"
I1209 23:20:18.081380 214436 cri.go:89] found id: "a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad"
I1209 23:20:18.081385 214436 cri.go:89] found id: ""
I1209 23:20:18.081392 214436 logs.go:282] 2 containers: [3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50 a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad]
I1209 23:20:18.081481 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:18.085180 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:18.089322 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1209 23:20:18.089409 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1209 23:20:18.134668 214436 cri.go:89] found id: "10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4"
I1209 23:20:18.134696 214436 cri.go:89] found id: "f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442"
I1209 23:20:18.134742 214436 cri.go:89] found id: ""
I1209 23:20:18.134755 214436 logs.go:282] 2 containers: [10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4 f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442]
I1209 23:20:18.134813 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:18.138670 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:18.142891 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1209 23:20:18.142966 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1209 23:20:18.186936 214436 cri.go:89] found id: "394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee"
I1209 23:20:18.186957 214436 cri.go:89] found id: "5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38"
I1209 23:20:18.186962 214436 cri.go:89] found id: ""
I1209 23:20:18.186970 214436 logs.go:282] 2 containers: [394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee 5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38]
I1209 23:20:18.187033 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:18.190761 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:18.194279 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1209 23:20:18.194347 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1209 23:20:18.241192 214436 cri.go:89] found id: "f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2"
I1209 23:20:18.241225 214436 cri.go:89] found id: ""
I1209 23:20:18.241234 214436 logs.go:282] 1 containers: [f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2]
I1209 23:20:18.241294 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:18.245026 214436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1209 23:20:18.245114 214436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1209 23:20:18.288409 214436 cri.go:89] found id: "5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa"
I1209 23:20:18.288432 214436 cri.go:89] found id: "9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d"
I1209 23:20:18.288437 214436 cri.go:89] found id: ""
I1209 23:20:18.288444 214436 logs.go:282] 2 containers: [5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa 9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d]
I1209 23:20:18.288512 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:18.292520 214436 ssh_runner.go:195] Run: which crictl
I1209 23:20:18.296226 214436 logs.go:123] Gathering logs for coredns [99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650] ...
I1209 23:20:18.296262 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650"
I1209 23:20:18.341368 214436 logs.go:123] Gathering logs for kube-proxy [a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad] ...
I1209 23:20:18.341398 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad"
I1209 23:20:18.382245 214436 logs.go:123] Gathering logs for kube-controller-manager [10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4] ...
I1209 23:20:18.382273 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4"
I1209 23:20:18.439435 214436 logs.go:123] Gathering logs for kindnet [394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee] ...
I1209 23:20:18.439469 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee"
I1209 23:20:18.486544 214436 logs.go:123] Gathering logs for kindnet [5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38] ...
I1209 23:20:18.486572 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38"
I1209 23:20:18.534401 214436 logs.go:123] Gathering logs for kubernetes-dashboard [f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2] ...
I1209 23:20:18.534429 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2"
I1209 23:20:18.578254 214436 logs.go:123] Gathering logs for kubelet ...
I1209 23:20:18.578286 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1209 23:20:18.641622 214436 logs.go:138] Found kubelet problem: Dec 09 23:14:42 old-k8s-version-098617 kubelet[661]: E1209 23:14:42.573159 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1209 23:20:18.641827 214436 logs.go:138] Found kubelet problem: Dec 09 23:14:43 old-k8s-version-098617 kubelet[661]: E1209 23:14:43.065876 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.644121 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:03 old-k8s-version-098617 kubelet[661]: E1209 23:15:03.237918 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.644585 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:04 old-k8s-version-098617 kubelet[661]: E1209 23:15:04.242454 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.647131 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:05 old-k8s-version-098617 kubelet[661]: E1209 23:15:05.924038 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1209 23:20:18.647801 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:09 old-k8s-version-098617 kubelet[661]: E1209 23:15:09.067006 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.648277 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:14 old-k8s-version-098617 kubelet[661]: E1209 23:15:14.272914 661 pod_workers.go:191] Error syncing pod 5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c ("storage-provisioner_kube-system(5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5d412ff6-54cf-4dde-b94b-f7cbb9a3a54c)"
W1209 23:20:18.648465 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:17 old-k8s-version-098617 kubelet[661]: E1209 23:15:17.595843 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.649467 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:20 old-k8s-version-098617 kubelet[661]: E1209 23:15:20.301079 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.649799 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:29 old-k8s-version-098617 kubelet[661]: E1209 23:15:29.067084 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.652410 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:30 old-k8s-version-098617 kubelet[661]: E1209 23:15:30.605005 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1209 23:20:18.653183 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:44 old-k8s-version-098617 kubelet[661]: E1209 23:15:44.374051 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.653375 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:44 old-k8s-version-098617 kubelet[661]: E1209 23:15:44.595088 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.653709 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:49 old-k8s-version-098617 kubelet[661]: E1209 23:15:49.067450 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.653897 214436 logs.go:138] Found kubelet problem: Dec 09 23:15:55 old-k8s-version-098617 kubelet[661]: E1209 23:15:55.596194 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.654224 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:00 old-k8s-version-098617 kubelet[661]: E1209 23:16:00.594905 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.654408 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:09 old-k8s-version-098617 kubelet[661]: E1209 23:16:09.595293 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.654770 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:12 old-k8s-version-098617 kubelet[661]: E1209 23:16:12.595368 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.657216 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:24 old-k8s-version-098617 kubelet[661]: E1209 23:16:24.603234 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1209 23:20:18.657805 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:28 old-k8s-version-098617 kubelet[661]: E1209 23:16:28.526789 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.658134 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:29 old-k8s-version-098617 kubelet[661]: E1209 23:16:29.529315 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.658317 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:36 old-k8s-version-098617 kubelet[661]: E1209 23:16:36.595249 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.658658 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:40 old-k8s-version-098617 kubelet[661]: E1209 23:16:40.594761 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.658854 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:49 old-k8s-version-098617 kubelet[661]: E1209 23:16:49.599764 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.659185 214436 logs.go:138] Found kubelet problem: Dec 09 23:16:55 old-k8s-version-098617 kubelet[661]: E1209 23:16:55.595001 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.659368 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:02 old-k8s-version-098617 kubelet[661]: E1209 23:17:02.595200 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.659698 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:09 old-k8s-version-098617 kubelet[661]: E1209 23:17:09.594689 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.659882 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:16 old-k8s-version-098617 kubelet[661]: E1209 23:17:16.595250 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.660209 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:22 old-k8s-version-098617 kubelet[661]: E1209 23:17:22.594748 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.660394 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:27 old-k8s-version-098617 kubelet[661]: E1209 23:17:27.596051 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.660719 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:36 old-k8s-version-098617 kubelet[661]: E1209 23:17:36.594954 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.660901 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:39 old-k8s-version-098617 kubelet[661]: E1209 23:17:39.595361 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.661251 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:47 old-k8s-version-098617 kubelet[661]: E1209 23:17:47.595830 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.663825 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:53 old-k8s-version-098617 kubelet[661]: E1209 23:17:53.605847 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1209 23:20:18.664420 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:58 old-k8s-version-098617 kubelet[661]: E1209 23:17:58.790114 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.664751 214436 logs.go:138] Found kubelet problem: Dec 09 23:17:59 old-k8s-version-098617 kubelet[661]: E1209 23:17:59.799377 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.664937 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:05 old-k8s-version-098617 kubelet[661]: E1209 23:18:05.595490 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.665261 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:12 old-k8s-version-098617 kubelet[661]: E1209 23:18:12.594695 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.665446 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:16 old-k8s-version-098617 kubelet[661]: E1209 23:18:16.595137 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.665770 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:24 old-k8s-version-098617 kubelet[661]: E1209 23:18:24.595133 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.665954 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:30 old-k8s-version-098617 kubelet[661]: E1209 23:18:30.595155 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.666280 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:38 old-k8s-version-098617 kubelet[661]: E1209 23:18:38.594827 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.666467 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:45 old-k8s-version-098617 kubelet[661]: E1209 23:18:45.599645 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.666860 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:52 old-k8s-version-098617 kubelet[661]: E1209 23:18:52.594642 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.667045 214436 logs.go:138] Found kubelet problem: Dec 09 23:18:56 old-k8s-version-098617 kubelet[661]: E1209 23:18:56.595179 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.667374 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:05 old-k8s-version-098617 kubelet[661]: E1209 23:19:05.595377 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.667717 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:09 old-k8s-version-098617 kubelet[661]: E1209 23:19:09.601501 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.668122 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:17 old-k8s-version-098617 kubelet[661]: E1209 23:19:17.599504 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.668344 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:23 old-k8s-version-098617 kubelet[661]: E1209 23:19:23.595388 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.668726 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:28 old-k8s-version-098617 kubelet[661]: E1209 23:19:28.594771 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.668945 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:34 old-k8s-version-098617 kubelet[661]: E1209 23:19:34.595081 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.669302 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:40 old-k8s-version-098617 kubelet[661]: E1209 23:19:40.595347 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.669492 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:48 old-k8s-version-098617 kubelet[661]: E1209 23:19:48.595049 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.669823 214436 logs.go:138] Found kubelet problem: Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: E1209 23:19:54.595330 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.670007 214436 logs.go:138] Found kubelet problem: Dec 09 23:20:01 old-k8s-version-098617 kubelet[661]: E1209 23:20:01.601974 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:18.670332 214436 logs.go:138] Found kubelet problem: Dec 09 23:20:09 old-k8s-version-098617 kubelet[661]: E1209 23:20:09.599956 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:18.670536 214436 logs.go:138] Found kubelet problem: Dec 09 23:20:12 old-k8s-version-098617 kubelet[661]: E1209 23:20:12.596822 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
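[Editor's note] The "Found kubelet problem" warnings above come from scanning the gathered journalctl slice for known error patterns. A simplified sketch of that scan, assuming the pod_workers "Error syncing pod" pattern is the one of interest (minikube's logs.go matches a broader set):

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "regexp"
    )

    func main() {
        // Same journal slice the log gathers for the kubelet unit.
        cmd := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400")
        out, err := cmd.StdoutPipe()
        if err != nil {
            fmt.Println(err)
            return
        }
        if err := cmd.Start(); err != nil {
            fmt.Println(err)
            return
        }
        // Illustrative pattern: every problem flagged above is a
        // pod_workers "Error syncing pod" line.
        problem := regexp.MustCompile(`pod_workers\.go:\d+\] Error syncing pod`)
        sc := bufio.NewScanner(out)
        for sc.Scan() {
            if line := sc.Text(); problem.MatchString(line) {
                fmt.Println("Found kubelet problem:", line)
            }
        }
        cmd.Wait()
    }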
I1209 23:20:18.670551 214436 logs.go:123] Gathering logs for kube-apiserver [9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df] ...
I1209 23:20:18.670566 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df"
I1209 23:20:18.726336 214436 logs.go:123] Gathering logs for coredns [dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b] ...
I1209 23:20:18.726367 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b"
I1209 23:20:18.772894 214436 logs.go:123] Gathering logs for kube-proxy [3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50] ...
I1209 23:20:18.772924 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50"
I1209 23:20:18.811216 214436 logs.go:123] Gathering logs for storage-provisioner [5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa] ...
I1209 23:20:18.811248 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa"
I1209 23:20:18.863382 214436 logs.go:123] Gathering logs for dmesg ...
I1209 23:20:18.863410 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1209 23:20:18.878963 214436 logs.go:123] Gathering logs for kube-apiserver [6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6] ...
I1209 23:20:18.878998 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6"
I1209 23:20:18.955508 214436 logs.go:123] Gathering logs for kube-scheduler [5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19] ...
I1209 23:20:18.955543 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19"
I1209 23:20:18.996932 214436 logs.go:123] Gathering logs for kube-controller-manager [f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442] ...
I1209 23:20:18.996961 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442"
I1209 23:20:19.055585 214436 logs.go:123] Gathering logs for storage-provisioner [9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d] ...
I1209 23:20:19.055622 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d"
I1209 23:20:19.101889 214436 logs.go:123] Gathering logs for etcd [063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c] ...
I1209 23:20:19.101918 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c"
I1209 23:20:19.152002 214436 logs.go:123] Gathering logs for kube-scheduler [e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e] ...
I1209 23:20:19.152031 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e"
I1209 23:20:19.191323 214436 logs.go:123] Gathering logs for containerd ...
I1209 23:20:19.191365 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1209 23:20:19.257897 214436 logs.go:123] Gathering logs for container status ...
I1209 23:20:19.257932 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1209 23:20:19.309335 214436 logs.go:123] Gathering logs for describe nodes ...
I1209 23:20:19.309363 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1209 23:20:19.459995 214436 logs.go:123] Gathering logs for etcd [55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd] ...
I1209 23:20:19.460028 214436 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd"
I1209 23:20:19.508050 214436 out.go:358] Setting ErrFile to fd 2...
I1209 23:20:19.508080 214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1209 23:20:19.508160 214436 out.go:270] X Problems detected in kubelet:
W1209 23:20:19.508177 214436 out.go:270] Dec 09 23:19:48 old-k8s-version-098617 kubelet[661]: E1209 23:19:48.595049 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:19.508200 214436 out.go:270] Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: E1209 23:19:54.595330 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:19.508214 214436 out.go:270] Dec 09 23:20:01 old-k8s-version-098617 kubelet[661]: E1209 23:20:01.601974 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 23:20:19.508220 214436 out.go:270] Dec 09 23:20:09 old-k8s-version-098617 kubelet[661]: E1209 23:20:09.599956 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
W1209 23:20:19.508239 214436 out.go:270] Dec 09 23:20:12 old-k8s-version-098617 kubelet[661]: E1209 23:20:12.596822 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1209 23:20:19.508273 214436 out.go:358] Setting ErrFile to fd 2...
I1209 23:20:19.508279 214436 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:20:29.510549 214436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1209 23:20:29.524882 214436 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1209 23:20:29.528146 214436 out.go:201]
W1209 23:20:29.530871 214436 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1209 23:20:29.530913 214436 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1209 23:20:29.530934 214436 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1209 23:20:29.530940 214436 out.go:270] *
W1209 23:20:29.531888 214436 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 23:20:29.534511 214436 out.go:201]
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-098617 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
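Note on the failure mode: the apiserver answered /healthz with 200 just before exit, but minikube's 6m0s component wait never saw the control plane report v1.20.0, so the run exits 102 with K8S_UNHEALTHY_CONTROL_PLANE rather than a hard crash. A minimal recovery sketch, following the suggestion minikube itself printed above (profile name and flags taken from this run; note `--purge` wipes all profiles under MINIKUBE_HOME, not just this one):

    out/minikube-linux-arm64 delete --all --purge
    out/minikube-linux-arm64 start -p old-k8s-version-098617 --memory=2200 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0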
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-098617
helpers_test.go:235: (dbg) docker inspect old-k8s-version-098617:
-- stdout --
[
{
"Id": "57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b",
"Created": "2024-12-09T23:11:15.751274578Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 214659,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-12-09T23:14:11.847036432Z",
"FinishedAt": "2024-12-09T23:14:10.741964012Z"
},
"Image": "sha256:51526bd7c0894c18bc1ef50650a0aaaea3bed24f70f72f77ac668ae72dfff137",
"ResolvConfPath": "/var/lib/docker/containers/57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b/hostname",
"HostsPath": "/var/lib/docker/containers/57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b/hosts",
"LogPath": "/var/lib/docker/containers/57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b/57f412c304cdcbf8e565c298234fb1bde21c07bb763667ba3ada7b85e7c9515b-json.log",
"Name": "/old-k8s-version-098617",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-098617:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-098617",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/b29bcf2165f013f78da042422e709637844eaa3274ebb164f571f14a16d0892f-init/diff:/var/lib/docker/overlay2/6cfa97401e314435cf365c42eba2c46d097e4b7837b825b4a08546b8c35c8dc6/diff",
"MergedDir": "/var/lib/docker/overlay2/b29bcf2165f013f78da042422e709637844eaa3274ebb164f571f14a16d0892f/merged",
"UpperDir": "/var/lib/docker/overlay2/b29bcf2165f013f78da042422e709637844eaa3274ebb164f571f14a16d0892f/diff",
"WorkDir": "/var/lib/docker/overlay2/b29bcf2165f013f78da042422e709637844eaa3274ebb164f571f14a16d0892f/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-098617",
"Source": "/var/lib/docker/volumes/old-k8s-version-098617/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-098617",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-098617",
"name.minikube.sigs.k8s.io": "old-k8s-version-098617",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "f23de1d61de65bb9b88f9be54321ec1d0391ac841fd142a2f92b88d1e97aa40b",
"SandboxKey": "/var/run/docker/netns/f23de1d61de6",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33063"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33064"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33067"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33065"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33066"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-098617": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "bc0947319246c73a3e3ae762238cdf8952fd9005098fc7272274c70a84c92d4d",
"EndpointID": "07754624237b3db22b03fe89e5ec978786d0bc29936d95bf18fbfcfe8eab1e60",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-098617",
"57f412c304cd"
]
}
}
}
}
]
-- /stdout --
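The inspect dump above can be reduced to the fields that matter here with docker's standard Go-template output; a sketch using names taken from the JSON (plain `docker inspect -f`, nothing minikube-specific):

    docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-098617
    docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-098617").IPAddress}}' old-k8s-version-098617

Against the JSON above these print "running restarts=0" and "192.168.76.2", i.e. the container itself is healthy and the failure is inside the cluster.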
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-098617 -n old-k8s-version-098617
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-098617 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-098617 logs -n 25: (2.843181288s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-521962 | cert-expiration-521962 | jenkins | v1.34.0 | 09 Dec 24 23:10 UTC | 09 Dec 24 23:10 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-786239 | force-systemd-env-786239 | jenkins | v1.34.0 | 09 Dec 24 23:10 UTC | 09 Dec 24 23:10 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-786239 | force-systemd-env-786239 | jenkins | v1.34.0 | 09 Dec 24 23:10 UTC | 09 Dec 24 23:10 UTC |
| start | -p cert-options-171060 | cert-options-171060 | jenkins | v1.34.0 | 09 Dec 24 23:10 UTC | 09 Dec 24 23:11 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-171060 ssh | cert-options-171060 | jenkins | v1.34.0 | 09 Dec 24 23:11 UTC | 09 Dec 24 23:11 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-171060 -- sudo | cert-options-171060 | jenkins | v1.34.0 | 09 Dec 24 23:11 UTC | 09 Dec 24 23:11 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-171060 | cert-options-171060 | jenkins | v1.34.0 | 09 Dec 24 23:11 UTC | 09 Dec 24 23:11 UTC |
| start | -p old-k8s-version-098617 | old-k8s-version-098617 | jenkins | v1.34.0 | 09 Dec 24 23:11 UTC | 09 Dec 24 23:13 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-521962 | cert-expiration-521962 | jenkins | v1.34.0 | 09 Dec 24 23:13 UTC | 09 Dec 24 23:14 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| addons | enable metrics-server -p old-k8s-version-098617 | old-k8s-version-098617 | jenkins | v1.34.0 | 09 Dec 24 23:13 UTC | 09 Dec 24 23:13 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-098617 | old-k8s-version-098617 | jenkins | v1.34.0 | 09 Dec 24 23:13 UTC | 09 Dec 24 23:14 UTC |
| | --alsologtostderr -v=3 | | | | | |
| delete | -p cert-expiration-521962 | cert-expiration-521962 | jenkins | v1.34.0 | 09 Dec 24 23:14 UTC | 09 Dec 24 23:14 UTC |
| start | -p no-preload-548785 | no-preload-548785 | jenkins | v1.34.0 | 09 Dec 24 23:14 UTC | 09 Dec 24 23:15 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| addons | enable dashboard -p old-k8s-version-098617 | old-k8s-version-098617 | jenkins | v1.34.0 | 09 Dec 24 23:14 UTC | 09 Dec 24 23:14 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-098617 | old-k8s-version-098617 | jenkins | v1.34.0 | 09 Dec 24 23:14 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-548785 | no-preload-548785 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-548785 | no-preload-548785 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-548785 | no-preload-548785 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:15 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-548785 | no-preload-548785 | jenkins | v1.34.0 | 09 Dec 24 23:15 UTC | 09 Dec 24 23:20 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| image | no-preload-548785 image list | no-preload-548785 | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-548785 | no-preload-548785 | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-548785 | no-preload-548785 | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-548785 | no-preload-548785 | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
| delete | -p no-preload-548785 | no-preload-548785 | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | 09 Dec 24 23:20 UTC |
| start | -p embed-certs-744076 | embed-certs-744076 | jenkins | v1.34.0 | 09 Dec 24 23:20 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/12/09 23:20:27
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.23.2 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1209 23:20:27.869402 226785 out.go:345] Setting OutFile to fd 1 ...
I1209 23:20:27.871169 226785 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:20:27.871222 226785 out.go:358] Setting ErrFile to fd 2...
I1209 23:20:27.871245 226785 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 23:20:27.871617 226785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19888-2244/.minikube/bin
I1209 23:20:27.872280 226785 out.go:352] Setting JSON to false
I1209 23:20:27.873364 226785 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3775,"bootTime":1733782653,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1209 23:20:27.873545 226785 start.go:139] virtualization:
I1209 23:20:27.875444 226785 out.go:177] * [embed-certs-744076] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1209 23:20:27.878885 226785 out.go:177] - MINIKUBE_LOCATION=19888
I1209 23:20:27.879016 226785 notify.go:220] Checking for updates...
I1209 23:20:27.881108 226785 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1209 23:20:27.882414 226785 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19888-2244/kubeconfig
I1209 23:20:27.884230 226785 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19888-2244/.minikube
I1209 23:20:27.887617 226785 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1209 23:20:27.889115 226785 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1209 23:20:27.890817 226785 config.go:182] Loaded profile config "old-k8s-version-098617": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1209 23:20:27.890913 226785 driver.go:394] Setting default libvirt URI to qemu:///system
I1209 23:20:27.937643 226785 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
I1209 23:20:27.937767 226785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 23:20:28.005266 226785 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 23:20:27.996099755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1209 23:20:28.005381 226785 docker.go:318] overlay module found
I1209 23:20:28.011614 226785 out.go:177] * Using the docker driver based on user configuration
I1209 23:20:28.012938 226785 start.go:297] selected driver: docker
I1209 23:20:28.012968 226785 start.go:901] validating driver "docker" against <nil>
I1209 23:20:28.012985 226785 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1209 23:20:28.013913 226785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 23:20:28.088716 226785 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 23:20:28.078901632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1209 23:20:28.088927 226785 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I1209 23:20:28.089170 226785 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1209 23:20:28.090868 226785 out.go:177] * Using Docker driver with root privileges
I1209 23:20:28.092522 226785 cni.go:84] Creating CNI manager for ""
I1209 23:20:28.092597 226785 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1209 23:20:28.092611 226785 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I1209 23:20:28.092696 226785 start.go:340] cluster config:
{Name:embed-certs-744076 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-744076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 23:20:28.094128 226785 out.go:177] * Starting "embed-certs-744076" primary control-plane node in "embed-certs-744076" cluster
I1209 23:20:28.095686 226785 cache.go:121] Beginning downloading kic base image for docker with containerd
I1209 23:20:28.097465 226785 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
I1209 23:20:28.098942 226785 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1209 23:20:28.099005 226785 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
I1209 23:20:28.099013 226785 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
I1209 23:20:28.099033 226785 cache.go:56] Caching tarball of preloaded images
I1209 23:20:28.099118 226785 preload.go:172] Found /home/jenkins/minikube-integration/19888-2244/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1209 23:20:28.099128 226785 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
I1209 23:20:28.099234 226785 profile.go:143] Saving config to /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/embed-certs-744076/config.json ...
I1209 23:20:28.099251 226785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19888-2244/.minikube/profiles/embed-certs-744076/config.json: {Name:mk841e42c3bc2d87c19bc50e9458d984fbc41d39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 23:20:28.120814 226785 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
I1209 23:20:28.120834 226785 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
I1209 23:20:28.120853 226785 cache.go:194] Successfully downloaded all kic artifacts
I1209 23:20:28.120883 226785 start.go:360] acquireMachinesLock for embed-certs-744076: {Name:mkca6141fc0cb8d284cb727d6174977d87cddf09 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 23:20:28.120993 226785 start.go:364] duration metric: took 86.604µs to acquireMachinesLock for "embed-certs-744076"
I1209 23:20:28.121020 226785 start.go:93] Provisioning new machine with config: &{Name:embed-certs-744076 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-744076 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1209 23:20:28.121175 226785 start.go:125] createHost starting for "" (driver="docker")
I1209 23:20:29.510549 214436 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1209 23:20:29.524882 214436 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1209 23:20:29.528146 214436 out.go:201]
W1209 23:20:29.530871 214436 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1209 23:20:29.530913 214436 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1209 23:20:29.530934 214436 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1209 23:20:29.530940 214436 out.go:270] *
W1209 23:20:29.531888 214436 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 23:20:29.534511 214436 out.go:201]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
26b3428efb807 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 877abb70d5e3f dashboard-metrics-scraper-8d5bb5db8-hmfdq
5cdaa2e6255fc ba04bb24b9575 5 minutes ago Running storage-provisioner 3 18a6972ae3661 storage-provisioner
f5e0a0afceebb 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 59bea17c71ea1 kubernetes-dashboard-cd95d586-9w2zp
f77b8039910f9 1611cd07b61d5 5 minutes ago Running busybox 1 bcc333eda2a9d busybox
dae54bdfe8b6d db91994f4ee8f 5 minutes ago Running coredns 1 698bb49131392 coredns-74ff55c5b-tz959
9c62f2e12bccb ba04bb24b9575 5 minutes ago Exited storage-provisioner 2 18a6972ae3661 storage-provisioner
3a123be1d317e 25a5233254979 5 minutes ago Running kube-proxy 1 ffa2eadb85d0e kube-proxy-d8xtk
394606f289ebf 2be0bcf609c65 5 minutes ago Running kindnet-cni 1 5c34aae934876 kindnet-8g8xl
9d1b42abf4137 2c08bbbc02d3a 6 minutes ago Running kube-apiserver 1 5eca194230e16 kube-apiserver-old-k8s-version-098617
10660454cbd9d 1df8a2b116bd1 6 minutes ago Running kube-controller-manager 1 76b6bd2de3b68 kube-controller-manager-old-k8s-version-098617
e3e7eabe1dad8 e7605f88f17d6 6 minutes ago Running kube-scheduler 1 a19a7fb23b0ed kube-scheduler-old-k8s-version-098617
55743b620c44b 05b738aa1bc63 6 minutes ago Running etcd 1 0d636fcae49c7 etcd-old-k8s-version-098617
d39cebd99127e 1611cd07b61d5 6 minutes ago Exited busybox 0 7dad798c84a92 busybox
99d9ed2f5b230 db91994f4ee8f 8 minutes ago Exited coredns 0 4f6103318c3c0 coredns-74ff55c5b-tz959
5125ce4b5b492 2be0bcf609c65 8 minutes ago Exited kindnet-cni 0 cbb7820bf820e kindnet-8g8xl
a33fca2389d21 25a5233254979 8 minutes ago Exited kube-proxy 0 cf2ee86c3f32c kube-proxy-d8xtk
5693d8f440cbb e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 d51e2f1483e9b kube-scheduler-old-k8s-version-098617
063e1c49d2c94 05b738aa1bc63 8 minutes ago Exited etcd 0 5fd7016bca415 etcd-old-k8s-version-098617
f68628204e6f9 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 07b952f17d5ef kube-controller-manager-old-k8s-version-098617
6d1ffef5c3c11 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 84d5b210b79df kube-apiserver-old-k8s-version-098617
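The ATTEMPT column confirms the two suspects from the kubelet warnings: dashboard-metrics-scraper sits Exited on attempt 5 (the CrashLoopBackOff), while metrics-server never appears because its image pull never succeeds. A sketch for pulling the scraper's own output on the node, using the container ID prefix from the table (crictl accepts ID prefixes, same as docker; `crictl ps --name` filters by container name):

    out/minikube-linux-arm64 -p old-k8s-version-098617 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper
    out/minikube-linux-arm64 -p old-k8s-version-098617 ssh -- sudo crictl logs 26b3428efb807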
==> containerd <==
Dec 09 23:16:24 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:24.602539647Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Dec 09 23:16:24 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:24.602592537Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.598872230Z" level=info msg="CreateContainer within sandbox \"877abb70d5e3f16a0ef459d179dcfdfa8cbd958892fbaa225454017f2baf0042\" for container name:\"dashboard-metrics-scraper\" attempt:4"
Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.622268230Z" level=info msg="CreateContainer within sandbox \"877abb70d5e3f16a0ef459d179dcfdfa8cbd958892fbaa225454017f2baf0042\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080\""
Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.622946757Z" level=info msg="StartContainer for \"da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080\""
Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.726144658Z" level=info msg="StartContainer for \"da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080\" returns successfully"
Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.753508481Z" level=info msg="shim disconnected" id=da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080 namespace=k8s.io
Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.753848807Z" level=warning msg="cleaning up after shim disconnected" id=da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080 namespace=k8s.io
Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.753925400Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 09 23:16:27 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:27.765789375Z" level=warning msg="cleanup warnings time=\"2024-12-09T23:16:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Dec 09 23:16:28 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:28.527480164Z" level=info msg="RemoveContainer for \"6f5e80f40b735d4b9e3bcb78fb0271ab065a486528e08373cbcf421f7d57ec07\""
Dec 09 23:16:28 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:16:28.532360667Z" level=info msg="RemoveContainer for \"6f5e80f40b735d4b9e3bcb78fb0271ab065a486528e08373cbcf421f7d57ec07\" returns successfully"
Dec 09 23:17:53 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:53.596069074Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 23:17:53 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:53.603531922Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Dec 09 23:17:53 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:53.605212475Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Dec 09 23:17:53 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:53.605279181Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.596811809Z" level=info msg="CreateContainer within sandbox \"877abb70d5e3f16a0ef459d179dcfdfa8cbd958892fbaa225454017f2baf0042\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.607227318Z" level=info msg="CreateContainer within sandbox \"877abb70d5e3f16a0ef459d179dcfdfa8cbd958892fbaa225454017f2baf0042\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07\""
Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.608050302Z" level=info msg="StartContainer for \"26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07\""
Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.700069142Z" level=info msg="StartContainer for \"26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07\" returns successfully"
Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.725522610Z" level=info msg="shim disconnected" id=26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07 namespace=k8s.io
Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.725603642Z" level=warning msg="cleaning up after shim disconnected" id=26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07 namespace=k8s.io
Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.725669545Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.804049711Z" level=info msg="RemoveContainer for \"da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080\""
Dec 09 23:17:58 old-k8s-version-098617 containerd[568]: time="2024-12-09T23:17:58.808920426Z" level=info msg="RemoveContainer for \"da1a0b8be74aa037ece31c8c9b11c9746d4c6edfcc05b6aba6726603a8206080\" returns successfully"
==> coredns [99d9ed2f5b230d0319b2465314133899b5c950239a3b96d9f4feb405f1b18650] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:40089 - 34616 "HINFO IN 8891620888748784360.2156695852859657496. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013919513s
==> coredns [dae54bdfe8b6d504e8061ba13c99a15682b561cf9bca5574313dd3097076811b] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:56340 - 39104 "HINFO IN 1986183853171438515.8557334799197777396. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020908198s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I1209 23:15:14.125168 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-09 23:14:44.124585579 +0000 UTC m=+0.077622778) (total time: 30.00044794s):
Trace[2019727887]: [30.00044794s] [30.00044794s] END
E1209 23:15:14.125203 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I1209 23:15:14.125410 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-09 23:14:44.125035532 +0000 UTC m=+0.078072731) (total time: 30.00036132s):
Trace[939984059]: [30.00036132s] [30.00036132s] END
E1209 23:15:14.125424 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I1209 23:15:14.125712 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-09 23:14:44.125354181 +0000 UTC m=+0.078391388) (total time: 30.000334481s):
Trace[911902081]: [30.000334481s] [30.000334481s] END
E1209 23:15:14.125726 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
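The i/o timeouts against 10.96.0.1:443 date from 23:15:14, while the restarted apiserver was still settling; 10.96.0.1 is the ClusterIP of the default kubernetes Service, not the apiserver's real address, and the later healthz probes in this log return 200. A sketch for checking that VIP's backing endpoint, mirroring how this log invokes kubectl on the node (binary and kubeconfig paths taken from the describe-nodes command above):

    out/minikube-linux-arm64 -p old-k8s-version-098617 ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get svc kubernetes
    out/minikube-linux-arm64 -p old-k8s-version-098617 ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get endpoints kubernetes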
==> describe nodes <==
Name: old-k8s-version-098617
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-098617
kubernetes.io/os=linux
minikube.k8s.io/commit=bdb91ee97b7db1e27267ce5f380a98e3176548b5
minikube.k8s.io/name=old-k8s-version-098617
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_12_09T23_11_52_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 09 Dec 2024 23:11:48 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-098617
AcquireTime: <unset>
RenewTime: Mon, 09 Dec 2024 23:20:22 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 09 Dec 2024 23:15:30 +0000 Mon, 09 Dec 2024 23:11:42 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 09 Dec 2024 23:15:30 +0000 Mon, 09 Dec 2024 23:11:42 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 09 Dec 2024 23:15:30 +0000 Mon, 09 Dec 2024 23:11:42 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 09 Dec 2024 23:15:30 +0000 Mon, 09 Dec 2024 23:12:07 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-098617
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022296Ki
pods: 110
System Info:
Machine ID: db277c901b564588b69668d59c8f2e19
System UUID: 2b872bb4-9aec-411e-96f8-88189f87523b
Boot ID: 982d10f7-311f-4ebf-96b3-48403acdb647
Kernel Version: 5.15.0-1072-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.22
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m43s
kube-system coredns-74ff55c5b-tz959 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 8m24s
kube-system etcd-old-k8s-version-098617 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m31s
kube-system kindnet-8g8xl 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 8m24s
kube-system kube-apiserver-old-k8s-version-098617 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m31s
kube-system kube-controller-manager-old-k8s-version-098617 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m31s
kube-system kube-proxy-d8xtk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m24s
kube-system kube-scheduler-old-k8s-version-098617 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m31s
kube-system metrics-server-9975d5f86-4rw7k 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m32s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m23s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-hmfdq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m35s
kubernetes-dashboard kubernetes-dashboard-cd95d586-9w2zp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m35s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m51s (x4 over 8m51s) kubelet Node old-k8s-version-098617 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m51s (x5 over 8m51s) kubelet Node old-k8s-version-098617 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m51s (x4 over 8m51s) kubelet Node old-k8s-version-098617 status is now: NodeHasSufficientPID
Normal Starting 8m31s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m31s kubelet Node old-k8s-version-098617 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m31s kubelet Node old-k8s-version-098617 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m31s kubelet Node old-k8s-version-098617 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m31s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m24s kubelet Node old-k8s-version-098617 status is now: NodeReady
Normal Starting 8m23s kube-proxy Starting kube-proxy.
Normal Starting 6m4s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m4s (x8 over 6m4s) kubelet Node old-k8s-version-098617 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m4s (x8 over 6m4s) kubelet Node old-k8s-version-098617 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m4s (x7 over 6m4s) kubelet Node old-k8s-version-098617 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m4s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m47s kube-proxy Starting kube-proxy.
==> dmesg <==
[Dec 9 22:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.013902] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.481128] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.026434] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.030455] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.016714] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.643686] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.085449] kauditd_printk_skb: 36 callbacks suppressed
[Dec 9 23:03] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
==> etcd [063e1c49d2c947a9f19b4fae6421961ea9a67ce263a258ff3303dcc0ab203f1c] <==
raft2024/12/09 23:11:42 INFO: ea7e25599daad906 became candidate at term 2
raft2024/12/09 23:11:42 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2024/12/09 23:11:42 INFO: ea7e25599daad906 became leader at term 2
raft2024/12/09 23:11:42 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2024-12-09 23:11:42.437698 I | etcdserver: published {Name:old-k8s-version-098617 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2024-12-09 23:11:42.437723 I | embed: ready to serve client requests
2024-12-09 23:11:42.439366 I | embed: serving client requests on 192.168.76.2:2379
2024-12-09 23:11:42.439444 I | embed: ready to serve client requests
2024-12-09 23:11:42.447624 I | etcdserver: setting up the initial cluster version to 3.4
2024-12-09 23:11:42.448191 N | etcdserver/membership: set the initial cluster version to 3.4
2024-12-09 23:11:42.461921 I | embed: serving client requests on 127.0.0.1:2379
2024-12-09 23:11:42.508996 I | etcdserver/api: enabled capabilities for version 3.4
2024-12-09 23:11:51.098935 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:12:06.203215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:12:08.878609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:12:18.878016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:12:28.877906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:12:38.878044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:12:48.877825 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:12:58.878002 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:13:08.878019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:13:18.877886 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:13:28.877963 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:13:38.877765 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:13:48.878021 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [55743b620c44bd40d4ac5faf5671b922b7011a15211b36e43cf05dad9e0fdbfd] <==
2024-12-09 23:16:23.538683 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:16:33.538659 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:16:43.538687 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:16:53.538682 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:17:03.538819 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:17:13.538734 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:17:23.538658 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:17:33.538892 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:17:43.538640 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:17:53.539064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:18:03.538734 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:18:13.538911 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:18:23.538787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:18:33.538687 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:18:43.538657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:18:53.538877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:19:03.539633 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:19:13.539471 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:19:23.538757 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:19:33.538837 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:19:43.538679 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:19:53.538832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:20:03.538646 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:20:13.538644 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 23:20:23.538779 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
23:20:32 up 1:02, 0 users, load average: 2.03, 2.27, 2.62
Linux old-k8s-version-098617 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [394606f289ebff6412cd8423f28e2ba1a7918b8e4eac2870a5c1825e8e571eee] <==
I1209 23:18:23.820304 1 main.go:301] handling current node
I1209 23:18:33.822944 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:18:33.822980 1 main.go:301] handling current node
I1209 23:18:43.814982 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:18:43.815046 1 main.go:301] handling current node
I1209 23:18:53.820290 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:18:53.820326 1 main.go:301] handling current node
I1209 23:19:03.823050 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:19:03.823081 1 main.go:301] handling current node
I1209 23:19:13.822927 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:19:13.822964 1 main.go:301] handling current node
I1209 23:19:23.818902 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:19:23.818943 1 main.go:301] handling current node
I1209 23:19:33.822794 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:19:33.822880 1 main.go:301] handling current node
I1209 23:19:43.815016 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:19:43.815053 1 main.go:301] handling current node
I1209 23:19:53.818593 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:19:53.818626 1 main.go:301] handling current node
I1209 23:20:03.823593 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:20:03.823626 1 main.go:301] handling current node
I1209 23:20:13.823606 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:20:13.823651 1 main.go:301] handling current node
I1209 23:20:23.818791 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:20:23.818943 1 main.go:301] handling current node
==> kindnet [5125ce4b5b492867a27c0ac7a7b0e99ee7d2c899aba434ce230c8fe5eb273f38] <==
I1209 23:12:12.103830 1 controller.go:365] Waiting for informer caches to sync
I1209 23:12:12.103836 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I1209 23:12:12.404010 1 shared_informer.go:320] Caches are synced for kube-network-policies
I1209 23:12:12.404103 1 metrics.go:61] Registering metrics
I1209 23:12:12.404202 1 controller.go:401] Syncing nftables rules
I1209 23:12:22.112365 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:12:22.112411 1 main.go:301] handling current node
I1209 23:12:32.103231 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:12:32.103272 1 main.go:301] handling current node
I1209 23:12:42.104180 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:12:42.104302 1 main.go:301] handling current node
I1209 23:12:52.108777 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:12:52.108811 1 main.go:301] handling current node
I1209 23:13:02.111878 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:13:02.111912 1 main.go:301] handling current node
I1209 23:13:12.103906 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:13:12.103939 1 main.go:301] handling current node
I1209 23:13:22.108837 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:13:22.108873 1 main.go:301] handling current node
I1209 23:13:32.112785 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:13:32.112821 1 main.go:301] handling current node
I1209 23:13:42.112681 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:13:42.112782 1 main.go:301] handling current node
I1209 23:13:52.103191 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1209 23:13:52.103223 1 main.go:301] handling current node
==> kube-apiserver [6d1ffef5c3c113df0c1c5643d627d680020df30e159ab4a69ccf738c6f7c09e6] <==
I1209 23:11:49.390946 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1209 23:11:49.390978 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1209 23:11:49.404120 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1209 23:11:49.407872 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1209 23:11:49.407898 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1209 23:11:49.899600 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1209 23:11:49.951414 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1209 23:11:50.053931 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I1209 23:11:50.055431 1 controller.go:606] quota admission added evaluator for: endpoints
I1209 23:11:50.061210 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1209 23:11:51.107531 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1209 23:11:51.584139 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1209 23:11:51.682149 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1209 23:12:00.203482 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1209 23:12:07.184429 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1209 23:12:07.207144 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1209 23:12:17.509983 1 client.go:360] parsed scheme: "passthrough"
I1209 23:12:17.510263 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 23:12:17.510282 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1209 23:13:01.769153 1 client.go:360] parsed scheme: "passthrough"
I1209 23:13:01.769363 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 23:13:01.769434 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1209 23:13:33.961383 1 client.go:360] parsed scheme: "passthrough"
I1209 23:13:33.961428 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 23:13:33.961461 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [9d1b42abf41370b358213f4f369435098a79f49c98a0b87c365dbfb7068093df] <==
I1209 23:16:59.541165 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 23:16:59.541202 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1209 23:17:43.931521 1 handler_proxy.go:102] no RequestInfo found in the context
E1209 23:17:43.931608 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1209 23:17:43.931625 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1209 23:17:44.386193 1 client.go:360] parsed scheme: "passthrough"
I1209 23:17:44.386384 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 23:17:44.386476 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1209 23:18:15.389343 1 client.go:360] parsed scheme: "passthrough"
I1209 23:18:15.389531 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 23:18:15.389551 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1209 23:18:48.092210 1 client.go:360] parsed scheme: "passthrough"
I1209 23:18:48.092268 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 23:18:48.092278 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1209 23:19:28.553359 1 client.go:360] parsed scheme: "passthrough"
I1209 23:19:28.553407 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 23:19:28.553418 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1209 23:19:40.417661 1 handler_proxy.go:102] no RequestInfo found in the context
E1209 23:19:40.417735 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1209 23:19:40.417752 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1209 23:20:01.812268 1 client.go:360] parsed scheme: "passthrough"
I1209 23:20:01.812318 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 23:20:01.812338 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [10660454cbd9d4da094cb8f100e7feceef0b146ea7a208113cba972405412cf4] <==
E1209 23:16:28.204822 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 23:16:36.143200 1 request.go:655] Throttling request took 1.047762695s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1209 23:16:36.994628 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 23:16:58.708378 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 23:17:08.645311 1 request.go:655] Throttling request took 1.045506116s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
W1209 23:17:09.496536 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 23:17:29.211862 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 23:17:41.147090 1 request.go:655] Throttling request took 1.048146193s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
W1209 23:17:41.998477 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 23:17:59.714009 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 23:18:13.648902 1 request.go:655] Throttling request took 1.048271957s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1209 23:18:14.500370 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 23:18:30.215916 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 23:18:46.151073 1 request.go:655] Throttling request took 1.046410694s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1beta1?timeout=32s
W1209 23:18:47.002646 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 23:19:00.721006 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 23:19:18.653095 1 request.go:655] Throttling request took 1.048506806s, request: GET:https://192.168.76.2:8443/apis/apps/v1?timeout=32s
W1209 23:19:19.504961 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 23:19:31.222929 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 23:19:51.155286 1 request.go:655] Throttling request took 1.048237346s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1?timeout=32s
W1209 23:19:52.006944 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 23:20:01.725731 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 23:20:23.658185 1 request.go:655] Throttling request took 1.04493748s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1209 23:20:24.509801 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 23:20:32.227516 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
==> kube-controller-manager [f68628204e6f96e082e5b48a6dc631b0a69e7de46bf5da75a9ca3e6911da3442] <==
I1209 23:12:07.166862 1 shared_informer.go:247] Caches are synced for daemon sets
I1209 23:12:07.178498 1 shared_informer.go:247] Caches are synced for attach detach
I1209 23:12:07.217068 1 shared_informer.go:247] Caches are synced for endpoint_slice
I1209 23:12:07.222308 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I1209 23:12:07.234451 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tz959"
I1209 23:12:07.272105 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-86hlm"
I1209 23:12:07.284225 1 shared_informer.go:247] Caches are synced for ReplicationController
I1209 23:12:07.284288 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d8xtk"
I1209 23:12:07.284301 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8g8xl"
I1209 23:12:07.303365 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I1209 23:12:07.303514 1 shared_informer.go:247] Caches are synced for resource quota
I1209 23:12:07.324840 1 shared_informer.go:247] Caches are synced for stateful set
I1209 23:12:07.341260 1 shared_informer.go:247] Caches are synced for disruption
I1209 23:12:07.341286 1 disruption.go:339] Sending events to api server.
I1209 23:12:07.510097 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
E1209 23:12:07.649252 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"80b2b055-de84-4bea-9f10-0df319d00f9e", ResourceVersion:"412", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63869382711, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000f9e020), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000f9e040)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000f9e060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000f9e080)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000f9e100), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001b34f80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f9e120), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f9e140), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000f9e180)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40004748a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001513ad8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004f0070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000114b20)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001513b28)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I1209 23:12:07.710285 1 shared_informer.go:247] Caches are synced for garbage collector
I1209 23:12:07.731074 1 shared_informer.go:247] Caches are synced for garbage collector
I1209 23:12:07.731102 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1209 23:12:07.750455 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1209 23:12:07.750496 1 shared_informer.go:247] Caches are synced for resource quota
I1209 23:12:08.194127 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I1209 23:12:08.212381 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-86hlm"
I1209 23:12:12.105707 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I1209 23:13:57.984954 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
==> kube-proxy [3a123be1d317e1e4f654bafa3493726c60356903a91f6e64d7b29782641f2d50] <==
I1209 23:14:43.986481 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1209 23:14:43.986550 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1209 23:14:44.159585 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1209 23:14:44.159684 1 server_others.go:185] Using iptables Proxier.
I1209 23:14:44.159913 1 server.go:650] Version: v1.20.0
I1209 23:14:44.160404 1 config.go:315] Starting service config controller
I1209 23:14:44.160421 1 shared_informer.go:240] Waiting for caches to sync for service config
I1209 23:14:44.184838 1 config.go:224] Starting endpoint slice config controller
I1209 23:14:44.184868 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1209 23:14:44.264813 1 shared_informer.go:247] Caches are synced for service config
I1209 23:14:44.287243 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [a33fca2389d21d809231c03f4d59c7c6edd2b935f0a6bee69e06642b5d121aad] <==
I1209 23:12:08.695219 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1209 23:12:08.695473 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1209 23:12:08.715529 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1209 23:12:08.715607 1 server_others.go:185] Using iptables Proxier.
I1209 23:12:08.715815 1 server.go:650] Version: v1.20.0
I1209 23:12:08.716306 1 config.go:315] Starting service config controller
I1209 23:12:08.716321 1 shared_informer.go:240] Waiting for caches to sync for service config
I1209 23:12:08.718325 1 config.go:224] Starting endpoint slice config controller
I1209 23:12:08.718343 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1209 23:12:08.817732 1 shared_informer.go:247] Caches are synced for service config
I1209 23:12:08.818600 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [5693d8f440cbbfff0094faddfc750157e27b436336b58b60026d6e4b6afb7c19] <==
I1209 23:11:44.274547 1 serving.go:331] Generated self-signed cert in-memory
W1209 23:11:48.590660 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1209 23:11:48.590779 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1209 23:11:48.590930 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1209 23:11:48.590939 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1209 23:11:48.649529 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1209 23:11:48.651872 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1209 23:11:48.651908 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1209 23:11:48.651926 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1209 23:11:48.674683 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1209 23:11:48.675978 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1209 23:11:48.677468 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1209 23:11:48.677892 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1209 23:11:48.678178 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1209 23:11:48.678444 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1209 23:11:48.678852 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1209 23:11:48.683030 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1209 23:11:48.684866 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1209 23:11:48.685025 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1209 23:11:48.685125 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1209 23:11:48.685305 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1209 23:11:49.758882 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I1209 23:11:52.852067 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [e3e7eabe1dad882678b1f0f2e7a8cff160d9b5e4146196f53ed8533082a0103e] <==
I1209 23:14:33.729740 1 serving.go:331] Generated self-signed cert in-memory
W1209 23:14:39.387314 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1209 23:14:39.387541 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1209 23:14:39.387668 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1209 23:14:39.387743 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1209 23:14:40.013274 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1209 23:14:40.023341 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1209 23:14:40.023366 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1209 23:14:40.023395 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1209 23:14:40.231339 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Dec 09 23:18:52 old-k8s-version-098617 kubelet[661]: E1209 23:18:52.594642 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
Dec 09 23:18:56 old-k8s-version-098617 kubelet[661]: E1209 23:18:56.595179 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 23:19:05 old-k8s-version-098617 kubelet[661]: I1209 23:19:05.594480 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
Dec 09 23:19:05 old-k8s-version-098617 kubelet[661]: E1209 23:19:05.595377 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
Dec 09 23:19:09 old-k8s-version-098617 kubelet[661]: E1209 23:19:09.601501 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 23:19:17 old-k8s-version-098617 kubelet[661]: I1209 23:19:17.598517 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
Dec 09 23:19:17 old-k8s-version-098617 kubelet[661]: E1209 23:19:17.599504 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
Dec 09 23:19:23 old-k8s-version-098617 kubelet[661]: E1209 23:19:23.595388 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 23:19:28 old-k8s-version-098617 kubelet[661]: I1209 23:19:28.594351 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
Dec 09 23:19:28 old-k8s-version-098617 kubelet[661]: E1209 23:19:28.594771 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
Dec 09 23:19:34 old-k8s-version-098617 kubelet[661]: E1209 23:19:34.595081 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 23:19:40 old-k8s-version-098617 kubelet[661]: I1209 23:19:40.594408 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
Dec 09 23:19:40 old-k8s-version-098617 kubelet[661]: E1209 23:19:40.595347 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
Dec 09 23:19:48 old-k8s-version-098617 kubelet[661]: E1209 23:19:48.595049 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: I1209 23:19:54.594470 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
Dec 09 23:19:54 old-k8s-version-098617 kubelet[661]: E1209 23:19:54.595330 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
Dec 09 23:20:01 old-k8s-version-098617 kubelet[661]: E1209 23:20:01.601974 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 23:20:09 old-k8s-version-098617 kubelet[661]: I1209 23:20:09.599508 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
Dec 09 23:20:09 old-k8s-version-098617 kubelet[661]: E1209 23:20:09.599956 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
Dec 09 23:20:12 old-k8s-version-098617 kubelet[661]: E1209 23:20:12.596822 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 23:20:20 old-k8s-version-098617 kubelet[661]: I1209 23:20:20.594389 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
Dec 09 23:20:20 old-k8s-version-098617 kubelet[661]: E1209 23:20:20.594826 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
Dec 09 23:20:27 old-k8s-version-098617 kubelet[661]: E1209 23:20:27.596077 661 pod_workers.go:191] Error syncing pod b58c211f-4135-4d72-a8eb-a915eea73d96 ("metrics-server-9975d5f86-4rw7k_kube-system(b58c211f-4135-4d72-a8eb-a915eea73d96)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 23:20:32 old-k8s-version-098617 kubelet[661]: I1209 23:20:32.594484 661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 26b3428efb80789472c470c3dfe33d2069cefb3c11164ae469638814f3870a07
Dec 09 23:20:32 old-k8s-version-098617 kubelet[661]: E1209 23:20:32.595347 661 pod_workers.go:191] Error syncing pod e414ccd5-083f-4d8f-9bff-54536e37be09 ("dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-hmfdq_kubernetes-dashboard(e414ccd5-083f-4d8f-9bff-54536e37be09)"
==> kubernetes-dashboard [f5e0a0afceebb969d8da6457fcca1f6b9964499a31fb1842750ccb5a3884ddf2] <==
2024/12/09 23:15:06 Starting overwatch
2024/12/09 23:15:06 Using namespace: kubernetes-dashboard
2024/12/09 23:15:06 Using in-cluster config to connect to apiserver
2024/12/09 23:15:06 Using secret token for csrf signing
2024/12/09 23:15:06 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/12/09 23:15:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/12/09 23:15:06 Successful initial request to the apiserver, version: v1.20.0
2024/12/09 23:15:06 Generating JWE encryption key
2024/12/09 23:15:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/12/09 23:15:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/12/09 23:15:06 Initializing JWE encryption key from synchronized object
2024/12/09 23:15:06 Creating in-cluster Sidecar client
2024/12/09 23:15:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 23:15:06 Serving insecurely on HTTP port: 9090
2024/12/09 23:15:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 23:16:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 23:16:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 23:17:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 23:17:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 23:18:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 23:18:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 23:19:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 23:19:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 23:20:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [5cdaa2e6255fc4d96282bee6cb565b6257b645c8f8ad628144066bc11b36d0aa] <==
I1209 23:15:29.768665 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1209 23:15:29.782091 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1209 23:15:29.784787 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1209 23:15:47.274163 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1209 23:15:47.274660 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7399b432-4f3e-4b63-af8d-3d8a1903dbca", APIVersion:"v1", ResourceVersion:"838", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-098617_0ad3c416-968b-49ab-9b3e-7b8d2929554c became leader
I1209 23:15:47.279922 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-098617_0ad3c416-968b-49ab-9b3e-7b8d2929554c!
I1209 23:15:47.381638 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-098617_0ad3c416-968b-49ab-9b3e-7b8d2929554c!
==> storage-provisioner [9c62f2e12bccb234691d9df725b23072f6bf214069ff068aea47352ec6a1ef2d] <==
I1209 23:14:43.564479 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1209 23:15:13.567348 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-098617 -n old-k8s-version-098617
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-098617 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-4rw7k
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-098617 describe pod metrics-server-9975d5f86-4rw7k
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-098617 describe pod metrics-server-9975d5f86-4rw7k: exit status 1 (108.791905ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-4rw7k" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-098617 describe pod metrics-server-9975d5f86-4rw7k: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (382.58s)