=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-066167 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
E1204 23:59:33.880519 7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/functional-876483/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:59:45.366620 7736 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/addons-458020/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-066167 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m17.162435798s)
-- stdout --
* [old-k8s-version-066167] minikube v1.34.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20045
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-066167" primary control-plane node in "old-k8s-version-066167" cluster
* Pulling base image v0.0.45-1730888964-19917 ...
* Restarting existing docker container for "old-k8s-version-066167" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
* Verifying Kubernetes components...
- Using image registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-066167 addons enable metrics-server
* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
-- /stdout --
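The stdout above looks clean (addons enabled, no error printed), yet the run exited non-zero after 6m17s; that wall time lines up with the "Will wait 6m0s for node" budget set in the stderr trace below, which points at the --wait=true component verification timing out rather than the start itself failing. A first triage pass on a run like this (profile name taken from the command above; both are standard minikube/kubectl invocations) might be:

    kubectl --context old-k8s-version-066167 get pods -A                      # which component never went Ready?
    out/minikube-linux-arm64 logs -p old-k8s-version-066167 --file=logs.txt   # capture the full log bundle for the profile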
** stderr **
I1204 23:58:52.147575 216030 out.go:345] Setting OutFile to fd 1 ...
I1204 23:58:52.147731 216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:58:52.147745 216030 out.go:358] Setting ErrFile to fd 2...
I1204 23:58:52.147750 216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:58:52.148163 216030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
I1204 23:58:52.148653 216030 out.go:352] Setting JSON to false
I1204 23:58:52.151539 216030 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6083,"bootTime":1733350650,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1204 23:58:52.151628 216030 start.go:139] virtualization:
I1204 23:58:52.155307 216030 out.go:177] * [old-k8s-version-066167] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1204 23:58:52.158922 216030 out.go:177] - MINIKUBE_LOCATION=20045
I1204 23:58:52.158998 216030 notify.go:220] Checking for updates...
I1204 23:58:52.166845 216030 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1204 23:58:52.169698 216030 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
I1204 23:58:52.172369 216030 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
I1204 23:58:52.175093 216030 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1204 23:58:52.177697 216030 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1204 23:58:52.180862 216030 config.go:182] Loaded profile config "old-k8s-version-066167": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1204 23:58:52.184261 216030 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
I1204 23:58:52.187008 216030 driver.go:394] Setting default libvirt URI to qemu:///system
I1204 23:58:52.230847 216030 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I1204 23:58:52.230955 216030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1204 23:58:52.318578 216030 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:67 SystemTime:2024-12-04 23:58:52.309551212 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
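The driver check above parses the full `docker system info --format "{{json .}}"` dump; the handful of fields minikube acts on later in this trace (CPU count, total memory, cgroup driver) can be spot-checked by hand with a narrower Go template, e.g.:

    docker system info --format 'NCPU={{.NCPU}} MemTotal={{.MemTotal}} CgroupDriver={{.CgroupDriver}}'
    # NCPU=2 MemTotal=8214835200 CgroupDriver=cgroupfs   (values as reported in the dump above)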
I1204 23:58:52.318686 216030 docker.go:318] overlay module found
I1204 23:58:52.321786 216030 out.go:177] * Using the docker driver based on existing profile
I1204 23:58:52.324627 216030 start.go:297] selected driver: docker
I1204 23:58:52.324647 216030 start.go:901] validating driver "docker" against &{Name:old-k8s-version-066167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-066167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1204 23:58:52.324767 216030 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1204 23:58:52.325595 216030 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1204 23:58:52.401669 216030 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:67 SystemTime:2024-12-04 23:58:52.390292228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1204 23:58:52.402062 216030 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1204 23:58:52.402096 216030 cni.go:84] Creating CNI manager for ""
I1204 23:58:52.402147 216030 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1204 23:58:52.402190 216030 start.go:340] cluster config:
{Name:old-k8s-version-066167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-066167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1204 23:58:52.405202 216030 out.go:177] * Starting "old-k8s-version-066167" primary control-plane node in "old-k8s-version-066167" cluster
I1204 23:58:52.407862 216030 cache.go:121] Beginning downloading kic base image for docker with containerd
I1204 23:58:52.410492 216030 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
I1204 23:58:52.413203 216030 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1204 23:58:52.413264 216030 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-2283/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I1204 23:58:52.413278 216030 cache.go:56] Caching tarball of preloaded images
I1204 23:58:52.413365 216030 preload.go:172] Found /home/jenkins/minikube-integration/20045-2283/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1204 23:58:52.413381 216030 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I1204 23:58:52.413502 216030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/config.json ...
I1204 23:58:52.413721 216030 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
I1204 23:58:52.442308 216030 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
I1204 23:58:52.442328 216030 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
I1204 23:58:52.442348 216030 cache.go:194] Successfully downloaded all kic artifacts
I1204 23:58:52.442380 216030 start.go:360] acquireMachinesLock for old-k8s-version-066167: {Name:mk44188120fe7b51da9a5c75c3fca881cdcbfcb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1204 23:58:52.442446 216030 start.go:364] duration metric: took 48.098µs to acquireMachinesLock for "old-k8s-version-066167"
I1204 23:58:52.442468 216030 start.go:96] Skipping create...Using existing machine configuration
I1204 23:58:52.442473 216030 fix.go:54] fixHost starting:
I1204 23:58:52.442722 216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
I1204 23:58:52.467466 216030 fix.go:112] recreateIfNeeded on old-k8s-version-066167: state=Stopped err=<nil>
W1204 23:58:52.467493 216030 fix.go:138] unexpected machine state, will restart: <nil>
I1204 23:58:52.470419 216030 out.go:177] * Restarting existing docker container for "old-k8s-version-066167" ...
I1204 23:58:52.473331 216030 cli_runner.go:164] Run: docker start old-k8s-version-066167
I1204 23:58:52.820226 216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
I1204 23:58:52.849840 216030 kic.go:430] container "old-k8s-version-066167" state is running.
I1204 23:58:52.850253 216030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-066167
I1204 23:58:52.882551 216030 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/config.json ...
I1204 23:58:52.882768 216030 machine.go:93] provisionDockerMachine start ...
I1204 23:58:52.882824 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:58:52.907667 216030 main.go:141] libmachine: Using SSH client type: native
I1204 23:58:52.908110 216030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I1204 23:58:52.908127 216030 main.go:141] libmachine: About to run SSH command:
hostname
I1204 23:58:52.908875 216030 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1204 23:58:56.037068 216030 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-066167
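The handshake EOF at 23:58:52.9 followed by a clean hostname round-trip at 23:58:56.0 is libmachine retrying while sshd inside the freshly restarted container comes up. The same round-trip can be reproduced by hand with the forwarded port, user, and key that appear later in this log:

    ssh -p 33063 -i /home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa \
        docker@127.0.0.1 hostname
    # old-k8s-version-066167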
I1204 23:58:56.037126 216030 ubuntu.go:169] provisioning hostname "old-k8s-version-066167"
I1204 23:58:56.037221 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:58:56.060011 216030 main.go:141] libmachine: Using SSH client type: native
I1204 23:58:56.060265 216030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I1204 23:58:56.060283 216030 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-066167 && echo "old-k8s-version-066167" | sudo tee /etc/hostname
I1204 23:58:56.217306 216030 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-066167
I1204 23:58:56.217389 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:58:56.254523 216030 main.go:141] libmachine: Using SSH client type: native
I1204 23:58:56.254790 216030 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil> [] 0s} 127.0.0.1 33063 <nil> <nil>}
I1204 23:58:56.254808 216030 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-066167' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-066167/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-066167' | sudo tee -a /etc/hosts;
fi
fi
I1204 23:58:56.405168 216030 main.go:141] libmachine: SSH cmd err, output: <nil>:
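The guarded script above is idempotent: it rewrites the 127.0.1.1 entry only when the hostname is not already present somewhere in /etc/hosts. Its expected end state (derived from the two branches, not captured verbatim in this log) is a single line:

    grep '^127.0.1.1' /etc/hosts
    # 127.0.1.1 old-k8s-version-066167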
I1204 23:58:56.405196 216030 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20045-2283/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-2283/.minikube}
I1204 23:58:56.405219 216030 ubuntu.go:177] setting up certificates
I1204 23:58:56.405229 216030 provision.go:84] configureAuth start
I1204 23:58:56.405296 216030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-066167
I1204 23:58:56.424866 216030 provision.go:143] copyHostCerts
I1204 23:58:56.424939 216030 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem, removing ...
I1204 23:58:56.424952 216030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem
I1204 23:58:56.425031 216030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem (1082 bytes)
I1204 23:58:56.425175 216030 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem, removing ...
I1204 23:58:56.425182 216030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem
I1204 23:58:56.425212 216030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem (1123 bytes)
I1204 23:58:56.425276 216030 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem, removing ...
I1204 23:58:56.425281 216030 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem
I1204 23:58:56.425305 216030 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem (1679 bytes)
I1204 23:58:56.425361 216030 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-066167 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-066167]
I1204 23:58:57.214859 216030 provision.go:177] copyRemoteCerts
I1204 23:58:57.214980 216030 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1204 23:58:57.215054 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:58:57.234639 216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
I1204 23:58:57.326684 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1204 23:58:57.353250 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1204 23:58:57.379765 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1204 23:58:57.406070 216030 provision.go:87] duration metric: took 1.000826036s to configureAuth
I1204 23:58:57.406099 216030 ubuntu.go:193] setting minikube options for container-runtime
I1204 23:58:57.406278 216030 config.go:182] Loaded profile config "old-k8s-version-066167": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1204 23:58:57.406292 216030 machine.go:96] duration metric: took 4.523516097s to provisionDockerMachine
I1204 23:58:57.406300 216030 start.go:293] postStartSetup for "old-k8s-version-066167" (driver="docker")
I1204 23:58:57.406311 216030 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1204 23:58:57.406375 216030 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1204 23:58:57.406422 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:58:57.430782 216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
I1204 23:58:57.523014 216030 ssh_runner.go:195] Run: cat /etc/os-release
I1204 23:58:57.526685 216030 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1204 23:58:57.526723 216030 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1204 23:58:57.526733 216030 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1204 23:58:57.526741 216030 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1204 23:58:57.526754 216030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-2283/.minikube/addons for local assets ...
I1204 23:58:57.526812 216030 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-2283/.minikube/files for local assets ...
I1204 23:58:57.526906 216030 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem -> 77362.pem in /etc/ssl/certs
I1204 23:58:57.527017 216030 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1204 23:58:57.536555 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem --> /etc/ssl/certs/77362.pem (1708 bytes)
I1204 23:58:57.562577 216030 start.go:296] duration metric: took 156.261441ms for postStartSetup
I1204 23:58:57.562661 216030 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1204 23:58:57.562712 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:58:57.580758 216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
I1204 23:58:57.667417 216030 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1204 23:58:57.672629 216030 fix.go:56] duration metric: took 5.23014743s for fixHost
I1204 23:58:57.672651 216030 start.go:83] releasing machines lock for "old-k8s-version-066167", held for 5.230196069s
I1204 23:58:57.672722 216030 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-066167
I1204 23:58:57.691473 216030 ssh_runner.go:195] Run: cat /version.json
I1204 23:58:57.691546 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:58:57.691794 216030 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1204 23:58:57.691895 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:58:57.722536 216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
I1204 23:58:57.731869 216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
I1204 23:58:57.816677 216030 ssh_runner.go:195] Run: systemctl --version
I1204 23:58:57.961593 216030 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1204 23:58:57.966186 216030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1204 23:58:57.990433 216030 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1204 23:58:57.990516 216030 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1204 23:58:57.999908 216030 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
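The two find/sed passes above normalize /etc/cni/net.d before kindnet is installed: the loopback config gains an explicit "name" and a 1.0.0 cniVersion, and any bridge/podman configs would be renamed to *.mk_disabled (none existed here). The patched loopback file's shape, inferred from the sed expressions rather than captured in the log, would be:

    cat /etc/cni/net.d/*loopback.conf*
    # {
    #     "cniVersion": "1.0.0",
    #     "name": "loopback",
    #     "type": "loopback"
    # }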
I1204 23:58:57.999932 216030 start.go:495] detecting cgroup driver to use...
I1204 23:58:57.999962 216030 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1204 23:58:58.000016 216030 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1204 23:58:58.015744 216030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1204 23:58:58.031207 216030 docker.go:217] disabling cri-docker service (if available) ...
I1204 23:58:58.031273 216030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1204 23:58:58.046736 216030 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1204 23:58:58.061583 216030 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1204 23:58:58.177083 216030 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1204 23:58:58.268488 216030 docker.go:233] disabling docker service ...
I1204 23:58:58.268556 216030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1204 23:58:58.285059 216030 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1204 23:58:58.297197 216030 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1204 23:58:58.418211 216030 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1204 23:58:58.540390 216030 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1204 23:58:58.554118 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1204 23:58:58.571707 216030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I1204 23:58:58.582154 216030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1204 23:58:58.592638 216030 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1204 23:58:58.592707 216030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1204 23:58:58.603254 216030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1204 23:58:58.613372 216030 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1204 23:58:58.623832 216030 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1204 23:58:58.634238 216030 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1204 23:58:58.643764 216030 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
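Taken together, the sed edits above rewrite /etc/containerd/config.toml in place for this v1.20 cluster: the pause image is pinned to 3.2, cgroupfs is used instead of systemd cgroups, the legacy v1 runtime names are swapped for the runc v2 shim, and the CNI conf dir is set to the standard path. A grep of the touched keys (expected values derived from the sed replacements; the surrounding section layout varies by containerd version) would show:

    sudo grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    #     sandbox_image = "registry.k8s.io/pause:3.2"
    #     restrict_oom_score_adj = false
    #     SystemdCgroup = false
    #     conf_dir = "/etc/cni/net.d"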
I1204 23:58:58.654770 216030 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1204 23:58:58.664172 216030 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1204 23:58:58.673293 216030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1204 23:58:58.773943 216030 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1204 23:58:58.984351 216030 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1204 23:58:58.984465 216030 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1204 23:58:58.988060 216030 start.go:563] Will wait 60s for crictl version
I1204 23:58:58.988165 216030 ssh_runner.go:195] Run: which crictl
I1204 23:58:58.991891 216030 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1204 23:58:59.057947 216030 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1204 23:58:59.058086 216030 ssh_runner.go:195] Run: containerd --version
I1204 23:58:59.079788 216030 ssh_runner.go:195] Run: containerd --version
I1204 23:58:59.105449 216030 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
I1204 23:58:59.107155 216030 cli_runner.go:164] Run: docker network inspect old-k8s-version-066167 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1204 23:58:59.120879 216030 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1204 23:58:59.124683 216030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1204 23:58:59.135009 216030 kubeadm.go:883] updating cluster {Name:old-k8s-version-066167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-066167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1204 23:58:59.135126 216030 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1204 23:58:59.135183 216030 ssh_runner.go:195] Run: sudo crictl images --output json
I1204 23:58:59.190673 216030 containerd.go:627] all images are preloaded for containerd runtime.
I1204 23:58:59.190693 216030 containerd.go:534] Images already preloaded, skipping extraction
I1204 23:58:59.190752 216030 ssh_runner.go:195] Run: sudo crictl images --output json
I1204 23:58:59.234335 216030 containerd.go:627] all images are preloaded for containerd runtime.
I1204 23:58:59.234408 216030 cache_images.go:84] Images are preloaded, skipping loading
I1204 23:58:59.234428 216030 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I1204 23:58:59.234562 216030 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-066167 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-066167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
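The [Service] block above is installed a few lines below as a systemd drop-in (the 442-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf), alongside the base unit at /lib/systemd/system/kubelet.service; the empty ExecStart= line is the usual systemd idiom for replacing, rather than appending to, the base unit's ExecStart. The merged unit can be inspected with:

    systemctl cat kubelet          # shows the base unit plus the 10-kubeadm.conf drop-in
    sudo systemctl daemon-reload   # needed after any unit edit, as the trace does at 23:58:59.364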
I1204 23:58:59.234647 216030 ssh_runner.go:195] Run: sudo crictl info
I1204 23:58:59.280713 216030 cni.go:84] Creating CNI manager for ""
I1204 23:58:59.280739 216030 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1204 23:58:59.280750 216030 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1204 23:58:59.280769 216030 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-066167 NodeName:old-k8s-version-066167 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I1204 23:58:59.280909 216030 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-066167"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
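This rendering is what lands in /var/tmp/minikube/kubeadm.yaml.new (the 2125-byte scp just below); on a restart, minikube decides whether the control plane needs reconfiguring by diffing it against the previous rendering, which is exactly the check that later comes back clean ("The running cluster does not require reconfiguration"):

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new && echo unchanged
    # unchanged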
I1204 23:58:59.280978 216030 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I1204 23:58:59.289856 216030 binaries.go:44] Found k8s binaries, skipping transfer
I1204 23:58:59.289968 216030 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1204 23:58:59.298778 216030 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I1204 23:58:59.316060 216030 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1204 23:58:59.333738 216030 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I1204 23:58:59.350993 216030 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1204 23:58:59.354415 216030 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
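Together with the host.minikube.internal entry written at 23:58:59.124, the guest's /etc/hosts now resolves both minikube-internal names; the expected entries (addresses taken from the two bash one-liners) are:

    grep minikube.internal /etc/hosts
    # 192.168.76.1	host.minikube.internal
    # 192.168.76.2	control-plane.minikube.internal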
I1204 23:58:59.364450 216030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1204 23:58:59.468283 216030 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1204 23:58:59.482569 216030 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167 for IP: 192.168.76.2
I1204 23:58:59.482640 216030 certs.go:194] generating shared ca certs ...
I1204 23:58:59.482669 216030 certs.go:226] acquiring lock for ca certs: {Name:mk1d98569ca320b9ee7e00b709eb6c9a159130d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1204 23:58:59.482853 216030 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-2283/.minikube/ca.key
I1204 23:58:59.482921 216030 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.key
I1204 23:58:59.482942 216030 certs.go:256] generating profile certs ...
I1204 23:58:59.483058 216030 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/client.key
I1204 23:58:59.483142 216030 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/apiserver.key.e0d61a35
I1204 23:58:59.483217 216030 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/proxy-client.key
I1204 23:58:59.483379 216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736.pem (1338 bytes)
W1204 23:58:59.483432 216030 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736_empty.pem, impossibly tiny 0 bytes
I1204 23:58:59.483455 216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem (1675 bytes)
I1204 23:58:59.483509 216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem (1082 bytes)
I1204 23:58:59.483557 216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem (1123 bytes)
I1204 23:58:59.483611 216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem (1679 bytes)
I1204 23:58:59.483685 216030 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem (1708 bytes)
I1204 23:58:59.484366 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1204 23:58:59.515037 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1204 23:58:59.555595 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1204 23:58:59.603752 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1204 23:58:59.687048 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I1204 23:58:59.717677 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1204 23:58:59.743289 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1204 23:58:59.767997 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/old-k8s-version-066167/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1204 23:58:59.793530 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem --> /usr/share/ca-certificates/77362.pem (1708 bytes)
I1204 23:58:59.819454 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1204 23:58:59.845346 216030 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736.pem --> /usr/share/ca-certificates/7736.pem (1338 bytes)
I1204 23:58:59.872268 216030 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1204 23:58:59.891781 216030 ssh_runner.go:195] Run: openssl version
I1204 23:58:59.897699 216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77362.pem && ln -fs /usr/share/ca-certificates/77362.pem /etc/ssl/certs/77362.pem"
I1204 23:58:59.907707 216030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77362.pem
I1204 23:58:59.911602 216030 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 4 23:19 /usr/share/ca-certificates/77362.pem
I1204 23:58:59.911715 216030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77362.pem
I1204 23:58:59.918842 216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77362.pem /etc/ssl/certs/3ec20f2e.0"
I1204 23:58:59.928547 216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1204 23:58:59.938897 216030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1204 23:58:59.942511 216030 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 4 23:11 /usr/share/ca-certificates/minikubeCA.pem
I1204 23:58:59.942620 216030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1204 23:58:59.949635 216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1204 23:58:59.958884 216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7736.pem && ln -fs /usr/share/ca-certificates/7736.pem /etc/ssl/certs/7736.pem"
I1204 23:58:59.968322 216030 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7736.pem
I1204 23:58:59.972166 216030 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 4 23:19 /usr/share/ca-certificates/7736.pem
I1204 23:58:59.972284 216030 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7736.pem
I1204 23:58:59.979460 216030 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7736.pem /etc/ssl/certs/51391683.0"
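The three test/ln pairs above install each CA under its OpenSSL subject hash, which is how the TLS stack looks certificates up in /etc/ssl/certs. The mapping can be verified by recomputing a hash (b5213941 is taken from the minikubeCA link created above):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941
    readlink /etc/ssl/certs/b5213941.0
    # /usr/share/ca-certificates/minikubeCA.pem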
I1204 23:58:59.988896 216030 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1204 23:58:59.992673 216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1204 23:58:59.999736 216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1204 23:59:00.006994 216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1204 23:59:00.014914 216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1204 23:59:00.023473 216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1204 23:59:00.032184 216030 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1204 23:59:00.040698 216030 kubeadm.go:392] StartCluster: {Name:old-k8s-version-066167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-066167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1204 23:59:00.040874 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1204 23:59:00.040990 216030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1204 23:59:00.143884 216030 cri.go:89] found id: "9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
I1204 23:59:00.144254 216030 cri.go:89] found id: "3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
I1204 23:59:00.144283 216030 cri.go:89] found id: "784529bd212fc0a79c877ec4e2c6446e0ea31c9805d13332863fc4f0e39cf480"
I1204 23:59:00.144322 216030 cri.go:89] found id: "f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
I1204 23:59:00.144339 216030 cri.go:89] found id: "cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
I1204 23:59:00.144361 216030 cri.go:89] found id: "05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
I1204 23:59:00.144381 216030 cri.go:89] found id: "138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
I1204 23:59:00.144408 216030 cri.go:89] found id: "03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
I1204 23:59:00.144434 216030 cri.go:89] found id: ""
I1204 23:59:00.144538 216030 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1204 23:59:00.164485 216030 cri.go:116] JSON = null
W1204 23:59:00.164595 216030 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
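The warning above comes from cross-checking two views of the same workloads before attempting an unpause: the CRI side reports 8 kube-system containers, while runc's state directory for the k8s.io namespace comes back empty (JSON = null), hence "list returned 0 containers, but ps returned 8". The two sides of that comparison, run by hand:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
    # 8
    sudo runc --root /run/containerd/runc/k8s.io list -f json
    # null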
I1204 23:59:00.164729 216030 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1204 23:59:00.178310 216030 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I1204 23:59:00.178397 216030 kubeadm.go:593] restartPrimaryControlPlane start ...
I1204 23:59:00.178489 216030 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1204 23:59:00.191190 216030 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1204 23:59:00.191806 216030 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-066167" does not appear in /home/jenkins/minikube-integration/20045-2283/kubeconfig
I1204 23:59:00.192040 216030 kubeconfig.go:62] /home/jenkins/minikube-integration/20045-2283/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-066167" cluster setting kubeconfig missing "old-k8s-version-066167" context setting]
I1204 23:59:00.192450 216030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-2283/kubeconfig: {Name:mka3b7dd57c7b1524b8db81fd47d2a503644c81e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
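[editor's note] kubeconfig.go has detected that the profile's cluster and context entries are missing and repairs the kubeconfig under a write lock. A hedged sketch of an equivalent repair using client-go's clientcmd package; the path, profile name, and endpoint come from this log, the error handling is trimmed, and minikube's real code additionally takes the file lock shown above before writing:

// Sketch of the kubeconfig repair logged above, using client-go's clientcmd.
// Endpoint and paths are taken from the log; this is not minikube's code.
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/20045-2283/kubeconfig"
	name := "old-k8s-version-066167"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}

	if _, ok := cfg.Clusters[name]; !ok {
		cluster := api.NewCluster()
		cluster.Server = "https://192.168.76.2:8443" // endpoint from the log
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}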
I1204 23:59:00.194567 216030 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1204 23:59:00.206583 216030 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I1204 23:59:00.206671 216030 kubeadm.go:597] duration metric: took 28.252334ms to restartPrimaryControlPlane
I1204 23:59:00.206700 216030 kubeadm.go:394] duration metric: took 166.012363ms to StartCluster
I1204 23:59:00.206746 216030 settings.go:142] acquiring lock: {Name:mkf88c0c5090e30b7bb8c2e4a8e4f7c9dd68316c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1204 23:59:00.206971 216030 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20045-2283/kubeconfig
I1204 23:59:00.207718 216030 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-2283/kubeconfig: {Name:mka3b7dd57c7b1524b8db81fd47d2a503644c81e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1204 23:59:00.208137 216030 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1204 23:59:00.208652 216030 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
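[editor's note] addons.go expands the request into the full toEnable map above, then walks it and starts setup for every addon that is true; the interleaved "Setting addon" / "Checking if ... exists" lines that follow suggest the enables run concurrently. A trivial hypothetical sketch of that fan-out (enableAddon and the concurrency structure are assumptions, not minikube's actual implementation):

// Sketch of the addon fan-out behind the interleaved "Setting addon" /
// "Checking if ... exists" lines below. enableAddon is hypothetical.
package main

import (
	"fmt"
	"sync"
)

func enableAddon(profile, name string) {
	fmt.Printf("Setting addon %s=true in %q\n", name, profile)
	// the real code inspects the docker container, scp's manifests, etc.
}

func main() {
	toEnable := map[string]bool{
		"dashboard": true, "default-storageclass": true,
		"metrics-server": true, "storage-provisioner": true,
		"ingress": false, // ...dozens more, all false in this run
	}
	var wg sync.WaitGroup
	for name, on := range toEnable {
		if !on {
			continue
		}
		wg.Add(1)
		go func(n string) { // concurrent enables would explain the interleaving
			defer wg.Done()
			enableAddon("old-k8s-version-066167", n)
		}(name)
	}
	wg.Wait()
}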
I1204 23:59:00.208774 216030 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-066167"
I1204 23:59:00.208798 216030 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-066167"
W1204 23:59:00.208812 216030 addons.go:243] addon storage-provisioner should already be in state true
I1204 23:59:00.208841 216030 host.go:66] Checking if "old-k8s-version-066167" exists ...
I1204 23:59:00.209026 216030 config.go:182] Loaded profile config "old-k8s-version-066167": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1204 23:59:00.209169 216030 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-066167"
I1204 23:59:00.209214 216030 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-066167"
I1204 23:59:00.209377 216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
I1204 23:59:00.209578 216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
I1204 23:59:00.217218 216030 addons.go:69] Setting dashboard=true in profile "old-k8s-version-066167"
I1204 23:59:00.217258 216030 addons.go:234] Setting addon dashboard=true in "old-k8s-version-066167"
W1204 23:59:00.217267 216030 addons.go:243] addon dashboard should already be in state true
I1204 23:59:00.217305 216030 host.go:66] Checking if "old-k8s-version-066167" exists ...
I1204 23:59:00.217806 216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
I1204 23:59:00.218031 216030 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-066167"
I1204 23:59:00.218066 216030 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-066167"
W1204 23:59:00.218105 216030 addons.go:243] addon metrics-server should already be in state true
I1204 23:59:00.218164 216030 host.go:66] Checking if "old-k8s-version-066167" exists ...
I1204 23:59:00.218695 216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
I1204 23:59:00.225241 216030 out.go:177] * Verifying Kubernetes components...
I1204 23:59:00.226609 216030 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1204 23:59:00.277895 216030 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1204 23:59:00.279274 216030 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1204 23:59:00.281076 216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1204 23:59:00.283039 216030 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1204 23:59:00.283150 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:59:00.302109 216030 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1204 23:59:00.304710 216030 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-066167"
W1204 23:59:00.304744 216030 addons.go:243] addon default-storageclass should already be in state true
I1204 23:59:00.304773 216030 host.go:66] Checking if "old-k8s-version-066167" exists ...
I1204 23:59:00.305336 216030 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1204 23:59:00.305357 216030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1204 23:59:00.305433 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:59:00.309401 216030 cli_runner.go:164] Run: docker container inspect old-k8s-version-066167 --format={{.State.Status}}
I1204 23:59:00.323428 216030 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1204 23:59:00.326473 216030 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1204 23:59:00.326509 216030 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1204 23:59:00.326640 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:59:00.377541 216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
I1204 23:59:00.394965 216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
I1204 23:59:00.404504 216030 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1204 23:59:00.404526 216030 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1204 23:59:00.404603 216030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-066167
I1204 23:59:00.415685 216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
I1204 23:59:00.435901 216030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/old-k8s-version-066167/id_rsa Username:docker}
I1204 23:59:00.526674 216030 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1204 23:59:00.571389 216030 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-066167" to be "Ready" ...
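[editor's note] node_ready.go now polls the apiserver until the node reports Ready; the "connection refused" and "TLS handshake timeout" errors further down are that poll observing the apiserver restart, not fatal failures. A minimal client-go equivalent (waitNodeReady is a hypothetical helper; clientset construction is omitted):

// Sketch of the node Ready poll driving the node_ready.go lines in this log.
package readiness

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Transient errors (connection refused, TLS handshake timeout)
			// are expected while the apiserver restarts: keep polling.
			return false, nil
		}
		for _, c := range node.Status.Conditions {
			if c.Type == v1.NodeReady {
				return c.Status == v1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}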
I1204 23:59:00.593433 216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1204 23:59:00.593454 216030 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1204 23:59:00.627958 216030 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1204 23:59:00.628029 216030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1204 23:59:00.652051 216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1204 23:59:00.652123 216030 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1204 23:59:00.672452 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1204 23:59:00.679452 216030 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1204 23:59:00.679521 216030 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1204 23:59:00.695610 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1204 23:59:00.723528 216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1204 23:59:00.723600 216030 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1204 23:59:00.751402 216030 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1204 23:59:00.751476 216030 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1204 23:59:00.799496 216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1204 23:59:00.799569 216030 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1204 23:59:00.860187 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1204 23:59:00.921012 216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I1204 23:59:00.921086 216030 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
W1204 23:59:01.024288 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:01.024379 216030 retry.go:31] will retry after 312.184752ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1204 23:59:01.034387 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:01.034488 216030 retry.go:31] will retry after 322.762797ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:01.040090 216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1204 23:59:01.040163 216030 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
W1204 23:59:01.093721 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:01.093757 216030 retry.go:31] will retry after 244.927607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
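[editor's note] Every "apply failed, will retry" / "will retry after" pair in this stretch is retry.go re-running the same kubectl apply with a jittered, growing delay until the apiserver answers. A self-contained sketch of that pattern; the constants are illustrative, and minikube's retry.go uses a backoff library rather than this exact loop:

// Jittered-backoff retry sketch matching the retry.go lines above. The log's
// delays (312ms, 322ms, 244ms, ... growing toward seconds) come from
// randomized backoff like this.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func retryApply(manifest string, attempts int) error {
	delay := 300 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.20.0/kubectl", "apply", "--force", "-f", manifest)
		if err = cmd.Run(); err == nil {
			return nil
		}
		// Randomize around the base delay, then grow it, as in the log.
		jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("apply failed, will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay = delay * 3 / 2
	}
	return err
}

func main() {
	if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 10); err != nil {
		fmt.Println("giving up:", err)
	}
}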
I1204 23:59:01.095596 216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1204 23:59:01.095619 216030 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1204 23:59:01.128097 216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1204 23:59:01.128173 216030 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1204 23:59:01.151401 216030 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1204 23:59:01.151431 216030 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1204 23:59:01.181872 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1204 23:59:01.337564 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1204 23:59:01.339384 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1204 23:59:01.358024 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1204 23:59:01.400453 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:01.400483 216030 retry.go:31] will retry after 178.135322ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:01.579375 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1204 23:59:01.913184 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:01.913224 216030 retry.go:31] will retry after 518.189037ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1204 23:59:01.946000 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1204 23:59:01.946032 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:01.946053 216030 retry.go:31] will retry after 315.867414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:01.946061 216030 retry.go:31] will retry after 219.565848ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1204 23:59:01.988491 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:01.988520 216030 retry.go:31] will retry after 309.910603ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:02.166270 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1204 23:59:02.263109 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1204 23:59:02.298783 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1204 23:59:02.358215 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:02.358257 216030 retry.go:31] will retry after 361.560544ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:02.432499 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1204 23:59:02.571825 216030 node_ready.go:53] error getting node "old-k8s-version-066167": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-066167": dial tcp 192.168.76.2:8443: connect: connection refused
I1204 23:59:02.720033 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1204 23:59:02.735767 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:02.735801 216030 retry.go:31] will retry after 418.341804ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1204 23:59:02.812590 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:02.812626 216030 retry.go:31] will retry after 488.130366ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1204 23:59:02.828249 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:02.828285 216030 retry.go:31] will retry after 310.105415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1204 23:59:02.880346 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:02.880374 216030 retry.go:31] will retry after 770.762768ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:03.139297 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1204 23:59:03.154559 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1204 23:59:03.300886 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1204 23:59:03.307845 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:03.307879 216030 retry.go:31] will retry after 800.137456ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1204 23:59:03.373919 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:03.373952 216030 retry.go:31] will retry after 1.120090819s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1204 23:59:03.441590 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:03.441626 216030 retry.go:31] will retry after 625.533972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:03.651608 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1204 23:59:03.811647 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:03.811681 216030 retry.go:31] will retry after 943.564938ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:04.068147 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1204 23:59:04.108975 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1204 23:59:04.173123 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:04.173162 216030 retry.go:31] will retry after 1.270498363s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1204 23:59:04.243218 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:04.243255 216030 retry.go:31] will retry after 1.522887692s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:04.494730 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1204 23:59:04.572350 216030 node_ready.go:53] error getting node "old-k8s-version-066167": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-066167": dial tcp 192.168.76.2:8443: connect: connection refused
W1204 23:59:04.597940 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:04.598023 216030 retry.go:31] will retry after 1.26879485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:04.756242 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1204 23:59:04.877238 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:04.877264 216030 retry.go:31] will retry after 2.106404771s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:05.444487 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1204 23:59:05.544370 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:05.544407 216030 retry.go:31] will retry after 2.39631732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:05.767291 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1204 23:59:05.867090 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1204 23:59:05.867237 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:05.867265 216030 retry.go:31] will retry after 1.509553348s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1204 23:59:05.975838 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:05.975880 216030 retry.go:31] will retry after 2.49774844s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:06.572434 216030 node_ready.go:53] error getting node "old-k8s-version-066167": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-066167": dial tcp 192.168.76.2:8443: connect: connection refused
I1204 23:59:06.983846 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1204 23:59:07.082591 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:07.082621 216030 retry.go:31] will retry after 1.712553314s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:07.377466 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1204 23:59:07.479578 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:07.479613 216030 retry.go:31] will retry after 2.677258788s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:07.941852 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1204 23:59:08.042298 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:08.042337 216030 retry.go:31] will retry after 2.646781732s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:08.474286 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1204 23:59:08.583037 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:08.583067 216030 retry.go:31] will retry after 1.540189467s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:08.795392 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1204 23:59:08.986708 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:08.986740 216030 retry.go:31] will retry after 2.631574868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1204 23:59:09.072402 216030 node_ready.go:53] error getting node "old-k8s-version-066167": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-066167": dial tcp 192.168.76.2:8443: connect: connection refused
I1204 23:59:10.124154 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1204 23:59:10.157411 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1204 23:59:10.689511 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1204 23:59:11.619306 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1204 23:59:20.572791 216030 node_ready.go:53] error getting node "old-k8s-version-066167": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-066167": net/http: TLS handshake timeout
I1204 23:59:20.902135 216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.777935896s)
W1204 23:59:20.902177 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I1204 23:59:20.902197 216030 retry.go:31] will retry after 4.180913506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I1204 23:59:20.905945 216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.748501717s)
W1204 23:59:20.905977 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I1204 23:59:20.905992 216030 retry.go:31] will retry after 3.572493709s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I1204 23:59:21.161741 216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.472183353s)
W1204 23:59:21.161778 216030 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I1204 23:59:21.161797 216030 retry.go:31] will retry after 5.277949957s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
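[editor's note] The three apply failures above all follow the same shape: the kubectl apply exits non-zero on a TLS handshake timeout, the failure is logged at addons.go:457, and retry.go:31 schedules another attempt after a non-round, jittered delay (4.18s, 3.57s, 5.28s). A minimal Go sketch of that jittered-retry pattern is below; retryApply is a hypothetical name for illustration, not minikube's actual retry.go implementation, and the 2s base delay is an assumption.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryApply reruns apply until it succeeds or attempts run out, sleeping a
// jittered delay between tries so concurrent appliers don't retry in lockstep
// (which is why the delays in the log above are not round numbers).
func retryApply(attempts int, base time.Duration, apply func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	// Simulated apply that fails twice with the same error seen in the log,
	// then succeeds on the third attempt.
	err := retryApply(3, 2*time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("Unable to connect to the server: net/http: TLS handshake timeout")
		}
		return nil
	})
	fmt.Println("final:", err)
}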
I1204 23:59:21.952838 216030 node_ready.go:49] node "old-k8s-version-066167" has status "Ready":"True"
I1204 23:59:21.952862 216030 node_ready.go:38] duration metric: took 21.381445101s for node "old-k8s-version-066167" to be "Ready" ...
I1204 23:59:21.952873 216030 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1204 23:59:22.213039 216030 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-vb8kf" in "kube-system" namespace to be "Ready" ...
I1204 23:59:22.567414 216030 pod_ready.go:93] pod "coredns-74ff55c5b-vb8kf" in "kube-system" namespace has status "Ready":"True"
I1204 23:59:22.567489 216030 pod_ready.go:82] duration metric: took 354.337475ms for pod "coredns-74ff55c5b-vb8kf" in "kube-system" namespace to be "Ready" ...
I1204 23:59:22.567517 216030 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
I1204 23:59:22.610199 216030 pod_ready.go:93] pod "etcd-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
I1204 23:59:22.610268 216030 pod_ready.go:82] duration metric: took 42.731173ms for pod "etcd-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
I1204 23:59:22.610309 216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
I1204 23:59:23.066631 216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.447288087s)
I1204 23:59:24.479125 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1204 23:59:24.626687 216030 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:25.083957 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1204 23:59:26.150754 216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.671589784s)
I1204 23:59:26.333320 216030 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.249315574s)
I1204 23:59:26.333419 216030 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-066167"
I1204 23:59:26.440433 216030 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1204 23:59:26.925999 216030 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-066167 addons enable metrics-server
I1204 23:59:26.928483 216030 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
I1204 23:59:26.930938 216030 addons.go:510] duration metric: took 26.722286953s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
I1204 23:59:27.116187 216030 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:29.117090 216030 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:31.136563 216030 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:32.116916 216030 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
I1204 23:59:32.116941 216030 pod_ready.go:82] duration metric: took 9.506606364s for pod "kube-apiserver-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
I1204 23:59:32.116955 216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
I1204 23:59:34.123385 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:36.124414 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:38.622468 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:41.134741 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:43.628217 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:46.129268 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:48.623824 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:51.129971 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:53.622941 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:55.623333 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1204 23:59:58.123609 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:00.155979 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:02.624179 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:05.124373 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:07.625365 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:10.124525 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:12.623285 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:15.124225 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:17.124364 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:19.124471 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:21.124677 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:23.624178 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:26.123881 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:28.141044 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:30.641472 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:33.124970 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:35.125567 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:37.125828 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:39.622706 216030 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:39.622731 216030 pod_ready.go:82] duration metric: took 1m7.50576737s for pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
I1205 00:00:39.622744 216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xh97b" in "kube-system" namespace to be "Ready" ...
I1205 00:00:39.627598 216030 pod_ready.go:93] pod "kube-proxy-xh97b" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:39.627663 216030 pod_ready.go:82] duration metric: took 4.909057ms for pod "kube-proxy-xh97b" in "kube-system" namespace to be "Ready" ...
I1205 00:00:39.627682 216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
I1205 00:00:41.635075 216030 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:44.133262 216030 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:45.634685 216030 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:45.634754 216030 pod_ready.go:82] duration metric: took 6.007062956s for pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
I1205 00:00:45.634781 216030 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace to be "Ready" ...
I1205 00:00:47.641160 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:50.142040 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:52.640397 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:54.641624 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:57.141636 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:59.640966 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:01.641368 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:04.141819 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:06.641245 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:08.643778 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:11.142085 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:13.142210 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:15.143248 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:17.640366 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:19.642863 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:22.141401 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:24.141731 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:26.640254 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:29.141453 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:31.640747 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:33.640815 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:35.640860 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:37.641357 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:40.141576 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:42.142551 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:44.640790 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:46.640978 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:49.140930 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:51.640575 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:54.141681 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:56.640948 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:58.641251 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:01.140947 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:03.141906 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:05.641253 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:08.140771 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:10.141503 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:12.141977 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:14.640789 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:16.640823 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:18.641073 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:20.641191 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:22.641262 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:25.142092 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:27.142352 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:29.640668 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:32.144704 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:34.641267 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:37.141776 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:39.640880 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:42.143365 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:44.641367 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:47.141280 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:49.141788 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:51.141822 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:53.186734 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:55.641386 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:58.141377 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:00.190535 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:02.640575 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:04.641218 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:07.141448 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:09.142518 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:11.646299 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:14.140064 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:16.141711 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:18.640395 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:20.641321 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:23.141469 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:25.142104 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:27.641721 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:30.141599 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:32.640894 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:34.641205 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:37.141279 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:39.141499 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:41.141843 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:43.142457 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:45.642402 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:48.141074 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:50.640840 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:52.640941 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:55.142516 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:57.641042 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:00.258272 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:02.640707 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:04.640786 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:06.640980 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:08.641054 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:11.146089 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:13.640923 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:16.141477 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:18.641364 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:21.154913 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:23.640479 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:25.641079 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:27.642694 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:30.141328 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:32.142061 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:34.646681 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:37.141273 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:39.142582 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:41.641272 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:44.154672 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:45.641935 216030 pod_ready.go:82] duration metric: took 4m0.007127886s for pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace to be "Ready" ...
E1205 00:04:45.641961 216030 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1205 00:04:45.641970 216030 pod_ready.go:39] duration metric: took 5m23.689087349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
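[editor's note] The pod_ready.go lines above are a per-pod poll loop: each wait runs up to 6m0s, re-checking the pod's Ready condition until it flips to "True" or the deadline passes, which is exactly what happens for metrics-server-9975d5f86-ksvdj (4m of "Ready":"False" polls ending in context deadline exceeded). A sketch of that kind of Ready-condition wait using client-go is below, for orientation only; WaitPodReady is a hypothetical name, the 2s poll interval is an assumption, and minikube's real pod_ready.go differs in detail.

package podready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady polls the named pod until its PodReady condition is True or
// the timeout elapses, in which case wait returns a timed-out error,
// analogous to the "context deadline exceeded" line above.
func WaitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling until the deadline
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}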
I1205 00:04:45.641984 216030 api_server.go:52] waiting for apiserver process to appear ...
I1205 00:04:45.642014 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1205 00:04:45.642080 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1205 00:04:45.701396 216030 cri.go:89] found id: "d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
I1205 00:04:45.701417 216030 cri.go:89] found id: "138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
I1205 00:04:45.701422 216030 cri.go:89] found id: ""
I1205 00:04:45.701428 216030 logs.go:282] 2 containers: [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8]
I1205 00:04:45.701487 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.706274 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.709870 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1205 00:04:45.709950 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1205 00:04:45.752726 216030 cri.go:89] found id: "d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
I1205 00:04:45.752759 216030 cri.go:89] found id: "03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
I1205 00:04:45.752764 216030 cri.go:89] found id: ""
I1205 00:04:45.752771 216030 logs.go:282] 2 containers: [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e]
I1205 00:04:45.752844 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.756595 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.759984 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1205 00:04:45.760054 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1205 00:04:45.802699 216030 cri.go:89] found id: "18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
I1205 00:04:45.802722 216030 cri.go:89] found id: "9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
I1205 00:04:45.802733 216030 cri.go:89] found id: ""
I1205 00:04:45.802741 216030 logs.go:282] 2 containers: [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c]
I1205 00:04:45.802798 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.806565 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.810357 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1205 00:04:45.810434 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1205 00:04:45.853797 216030 cri.go:89] found id: "4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
I1205 00:04:45.853818 216030 cri.go:89] found id: "05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
I1205 00:04:45.853823 216030 cri.go:89] found id: ""
I1205 00:04:45.853832 216030 logs.go:282] 2 containers: [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0]
I1205 00:04:45.853889 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.857263 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.862164 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1205 00:04:45.862243 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1205 00:04:45.902320 216030 cri.go:89] found id: "355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
I1205 00:04:45.902409 216030 cri.go:89] found id: "f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
I1205 00:04:45.902423 216030 cri.go:89] found id: ""
I1205 00:04:45.902431 216030 logs.go:282] 2 containers: [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88]
I1205 00:04:45.902501 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.906129 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.909489 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1205 00:04:45.909590 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1205 00:04:45.951353 216030 cri.go:89] found id: "0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
I1205 00:04:45.951376 216030 cri.go:89] found id: "cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
I1205 00:04:45.951381 216030 cri.go:89] found id: ""
I1205 00:04:45.951388 216030 logs.go:282] 2 containers: [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15]
I1205 00:04:45.951449 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.955123 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.958548 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1205 00:04:45.958621 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1205 00:04:46.013456 216030 cri.go:89] found id: "9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
I1205 00:04:46.013484 216030 cri.go:89] found id: "3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
I1205 00:04:46.013489 216030 cri.go:89] found id: ""
I1205 00:04:46.013497 216030 logs.go:282] 2 containers: [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d]
I1205 00:04:46.013620 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:46.018166 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:46.022058 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1205 00:04:46.022188 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1205 00:04:46.071154 216030 cri.go:89] found id: "eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
I1205 00:04:46.071186 216030 cri.go:89] found id: ""
I1205 00:04:46.071195 216030 logs.go:282] 1 containers: [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e]
I1205 00:04:46.071278 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:46.075279 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1205 00:04:46.075401 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1205 00:04:46.115487 216030 cri.go:89] found id: "61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
I1205 00:04:46.115560 216030 cri.go:89] found id: "cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
I1205 00:04:46.115580 216030 cri.go:89] found id: ""
I1205 00:04:46.115593 216030 logs.go:282] 2 containers: [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf]
I1205 00:04:46.115669 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:46.119363 216030 ssh_runner.go:195] Run: which crictl
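[editor's note] The cri.go/logs.go block above is a two-step list-then-dump pattern: for each component, resolve container IDs with `sudo crictl ps -a --quiet --name=<component>` (possibly two IDs per component, since the restart leaves an old and a new container), then tail each container's logs with `sudo crictl logs --tail 400 <id>`. A small Go sketch of the same pattern, shelling out to the crictl commands shown in the log, is below; dumpLogs is a hypothetical name for illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dumpLogs lists all container IDs matching a name filter, then tails each
// one's logs, mirroring the ps/logs command pairs recorded above.
func dumpLogs(name string) error {
	// `crictl ps -a --quiet` prints bare container IDs, one per line.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return err
	}
	for _, id := range strings.Fields(string(out)) {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
	}
	return nil
}

func main() {
	if err := dumpLogs("kube-apiserver"); err != nil {
		fmt.Println("error:", err)
	}
}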
I1205 00:04:46.122924 216030 logs.go:123] Gathering logs for coredns [9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c] ...
I1205 00:04:46.122956 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
I1205 00:04:46.164473 216030 logs.go:123] Gathering logs for storage-provisioner [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b] ...
I1205 00:04:46.164503 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
I1205 00:04:46.219238 216030 logs.go:123] Gathering logs for describe nodes ...
I1205 00:04:46.219270 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1205 00:04:46.367441 216030 logs.go:123] Gathering logs for kube-proxy [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f] ...
I1205 00:04:46.367470 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
I1205 00:04:46.406779 216030 logs.go:123] Gathering logs for kindnet [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae] ...
I1205 00:04:46.406805 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
I1205 00:04:46.454765 216030 logs.go:123] Gathering logs for kindnet [3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d] ...
I1205 00:04:46.454792 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
I1205 00:04:46.498510 216030 logs.go:123] Gathering logs for kubernetes-dashboard [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e] ...
I1205 00:04:46.498538 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
I1205 00:04:46.537447 216030 logs.go:123] Gathering logs for containerd ...
I1205 00:04:46.537476 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1205 00:04:46.617148 216030 logs.go:123] Gathering logs for etcd [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716] ...
I1205 00:04:46.617196 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
I1205 00:04:46.667834 216030 logs.go:123] Gathering logs for kube-scheduler [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30] ...
I1205 00:04:46.667985 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
I1205 00:04:46.732274 216030 logs.go:123] Gathering logs for kube-scheduler [05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0] ...
I1205 00:04:46.732303 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
I1205 00:04:46.792624 216030 logs.go:123] Gathering logs for kube-proxy [f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88] ...
I1205 00:04:46.792656 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
I1205 00:04:46.830707 216030 logs.go:123] Gathering logs for storage-provisioner [cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf] ...
I1205 00:04:46.830736 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
I1205 00:04:46.875737 216030 logs.go:123] Gathering logs for kube-apiserver [138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8] ...
I1205 00:04:46.875769 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
I1205 00:04:46.960343 216030 logs.go:123] Gathering logs for dmesg ...
I1205 00:04:46.960376 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1205 00:04:46.978879 216030 logs.go:123] Gathering logs for kube-apiserver [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7] ...
I1205 00:04:46.978908 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
I1205 00:04:47.043184 216030 logs.go:123] Gathering logs for etcd [03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e] ...
I1205 00:04:47.043220 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
I1205 00:04:47.095108 216030 logs.go:123] Gathering logs for coredns [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da] ...
I1205 00:04:47.095137 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
I1205 00:04:47.138073 216030 logs.go:123] Gathering logs for kube-controller-manager [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196] ...
I1205 00:04:47.138112 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
I1205 00:04:47.200917 216030 logs.go:123] Gathering logs for kube-controller-manager [cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15] ...
I1205 00:04:47.200959 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
I1205 00:04:47.290017 216030 logs.go:123] Gathering logs for container status ...
I1205 00:04:47.290077 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1205 00:04:47.355835 216030 logs.go:123] Gathering logs for kubelet ...
I1205 00:04:47.355861 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1205 00:04:47.415957 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029161 664 reflector.go:138] object-"kube-system"/"kindnet-token-rrxv8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rrxv8" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:47.416229 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029640 664 reflector.go:138] object-"default"/"default-token-6q5g5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6q5g5" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:47.416462 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029889 664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7b2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7b2f" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:47.422607 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:23 old-k8s-version-066167 kubelet[664]: E1204 23:59:23.455493 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:47.422894 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:24 old-k8s-version-066167 kubelet[664]: E1204 23:59:24.408536 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.425873 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:36 old-k8s-version-066167 kubelet[664]: E1204 23:59:36.194820 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:47.427977 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:46 old-k8s-version-066167 kubelet[664]: E1204 23:59:46.769553 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.428166 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.173711 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.428495 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.774292 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.429164 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.679719 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.429606 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.812800 664 pod_workers.go:191] Error syncing pod 81fe575b-ab3c-49a1-b013-84ec8c0bea1c ("storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"
W1205 00:04:47.432365 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:00 old-k8s-version-066167 kubelet[664]: E1205 00:00:00.315343 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:47.432950 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:08 old-k8s-version-066167 kubelet[664]: E1205 00:00:08.854222 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.433315 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:11 old-k8s-version-066167 kubelet[664]: E1205 00:00:11.169368 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.433645 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:15 old-k8s-version-066167 kubelet[664]: E1205 00:00:15.678257 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.433831 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:26 old-k8s-version-066167 kubelet[664]: E1205 00:00:26.169392 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.434156 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:27 old-k8s-version-066167 kubelet[664]: E1205 00:00:27.168949 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.434742 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:39 old-k8s-version-066167 kubelet[664]: E1205 00:00:39.964267 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.437168 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:41 old-k8s-version-066167 kubelet[664]: E1205 00:00:41.177237 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:47.437499 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:45 old-k8s-version-066167 kubelet[664]: E1205 00:00:45.677813 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.437686 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:54 old-k8s-version-066167 kubelet[664]: E1205 00:00:54.170310 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.438017 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:57 old-k8s-version-066167 kubelet[664]: E1205 00:00:57.168714 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.438200 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:07 old-k8s-version-066167 kubelet[664]: E1205 00:01:07.169610 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.438538 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:08 old-k8s-version-066167 kubelet[664]: E1205 00:01:08.169137 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.439120 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.080372 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.439303 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.172882 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.439631 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:25 old-k8s-version-066167 kubelet[664]: E1205 00:01:25.677810 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.439814 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:36 old-k8s-version-066167 kubelet[664]: E1205 00:01:36.169551 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.440143 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:39 old-k8s-version-066167 kubelet[664]: E1205 00:01:39.170525 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.440328 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:50 old-k8s-version-066167 kubelet[664]: E1205 00:01:50.170399 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.440657 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:52 old-k8s-version-066167 kubelet[664]: E1205 00:01:52.168832 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.441030 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:03 old-k8s-version-066167 kubelet[664]: E1205 00:02:03.168796 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.443463 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:04 old-k8s-version-066167 kubelet[664]: E1205 00:02:04.179849 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:47.443782 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.169877 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.443980 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.170577 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.444164 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:27 old-k8s-version-066167 kubelet[664]: E1205 00:02:27.169252 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.444490 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:30 old-k8s-version-066167 kubelet[664]: E1205 00:02:30.169307 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.444674 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:40 old-k8s-version-066167 kubelet[664]: E1205 00:02:40.172577 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.445266 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:43 old-k8s-version-066167 kubelet[664]: E1205 00:02:43.346410 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.445596 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:45 old-k8s-version-066167 kubelet[664]: E1205 00:02:45.677951 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.445781 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:53 old-k8s-version-066167 kubelet[664]: E1205 00:02:53.169354 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.446106 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:58 old-k8s-version-066167 kubelet[664]: E1205 00:02:58.169560 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.446289 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:08 old-k8s-version-066167 kubelet[664]: E1205 00:03:08.172186 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.446622 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:12 old-k8s-version-066167 kubelet[664]: E1205 00:03:12.169424 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.446806 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:21 old-k8s-version-066167 kubelet[664]: E1205 00:03:21.169226 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.447136 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:23 old-k8s-version-066167 kubelet[664]: E1205 00:03:23.168967 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.447319 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:34 old-k8s-version-066167 kubelet[664]: E1205 00:03:34.173087 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.447646 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:36 old-k8s-version-066167 kubelet[664]: E1205 00:03:36.169303 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.447972 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.168901 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.448154 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.170002 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.448468 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169436 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.448666 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169654 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.448992 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: E1205 00:04:14.169276 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.449185 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.449511 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.449719 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.450052 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.450238 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1205 00:04:47.450255 216030 out.go:358] Setting ErrFile to fd 2...
I1205 00:04:47.450266 216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1205 00:04:47.450326 216030 out.go:270] X Problems detected in kubelet:
W1205 00:04:47.450338 216030 out.go:270] Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.450346 216030 out.go:270] Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.450353 216030 out.go:270] Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.450362 216030 out.go:270] Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.450370 216030 out.go:270] Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1205 00:04:47.450378 216030 out.go:358] Setting ErrFile to fd 2...
I1205 00:04:47.450384 216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 00:04:57.451637 216030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 00:04:57.463556 216030 api_server.go:72] duration metric: took 5m57.25534682s to wait for apiserver process to appear ...
I1205 00:04:57.463582 216030 api_server.go:88] waiting for apiserver healthz status ...
I1205 00:04:57.463617 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1205 00:04:57.463679 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1205 00:04:57.502613 216030 cri.go:89] found id: "d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
I1205 00:04:57.502634 216030 cri.go:89] found id: "138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
I1205 00:04:57.502639 216030 cri.go:89] found id: ""
I1205 00:04:57.502646 216030 logs.go:282] 2 containers: [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8]
I1205 00:04:57.502706 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.506578 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.510329 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1205 00:04:57.510403 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1205 00:04:57.549412 216030 cri.go:89] found id: "d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
I1205 00:04:57.549434 216030 cri.go:89] found id: "03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
I1205 00:04:57.549439 216030 cri.go:89] found id: ""
I1205 00:04:57.549446 216030 logs.go:282] 2 containers: [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e]
I1205 00:04:57.549522 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.553176 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.556561 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1205 00:04:57.556630 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1205 00:04:57.606322 216030 cri.go:89] found id: "18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
I1205 00:04:57.606344 216030 cri.go:89] found id: "9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
I1205 00:04:57.606349 216030 cri.go:89] found id: ""
I1205 00:04:57.606356 216030 logs.go:282] 2 containers: [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c]
I1205 00:04:57.606414 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.610546 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.614234 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1205 00:04:57.614302 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1205 00:04:57.657522 216030 cri.go:89] found id: "4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
I1205 00:04:57.657543 216030 cri.go:89] found id: "05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
I1205 00:04:57.657549 216030 cri.go:89] found id: ""
I1205 00:04:57.657556 216030 logs.go:282] 2 containers: [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0]
I1205 00:04:57.657619 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.661379 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.664752 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1205 00:04:57.664830 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1205 00:04:57.712770 216030 cri.go:89] found id: "355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
I1205 00:04:57.712861 216030 cri.go:89] found id: "f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
I1205 00:04:57.712880 216030 cri.go:89] found id: ""
I1205 00:04:57.712898 216030 logs.go:282] 2 containers: [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88]
I1205 00:04:57.712996 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.717580 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.721738 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1205 00:04:57.721819 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1205 00:04:57.759280 216030 cri.go:89] found id: "0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
I1205 00:04:57.759302 216030 cri.go:89] found id: "cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
I1205 00:04:57.759307 216030 cri.go:89] found id: ""
I1205 00:04:57.759314 216030 logs.go:282] 2 containers: [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15]
I1205 00:04:57.759371 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.763240 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.766739 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1205 00:04:57.766823 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1205 00:04:57.804341 216030 cri.go:89] found id: "9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
I1205 00:04:57.804366 216030 cri.go:89] found id: "3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
I1205 00:04:57.804372 216030 cri.go:89] found id: ""
I1205 00:04:57.804379 216030 logs.go:282] 2 containers: [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d]
I1205 00:04:57.804439 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.808307 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.811971 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1205 00:04:57.812044 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1205 00:04:57.865535 216030 cri.go:89] found id: "61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
I1205 00:04:57.865556 216030 cri.go:89] found id: "cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
I1205 00:04:57.865561 216030 cri.go:89] found id: ""
I1205 00:04:57.865568 216030 logs.go:282] 2 containers: [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf]
I1205 00:04:57.865627 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.869504 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.872895 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1205 00:04:57.873022 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1205 00:04:57.913425 216030 cri.go:89] found id: "eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
I1205 00:04:57.913449 216030 cri.go:89] found id: ""
I1205 00:04:57.913463 216030 logs.go:282] 1 containers: [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e]
I1205 00:04:57.913526 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.917503 216030 logs.go:123] Gathering logs for kube-proxy [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f] ...
I1205 00:04:57.917529 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
I1205 00:04:57.959718 216030 logs.go:123] Gathering logs for kube-controller-manager [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196] ...
I1205 00:04:57.959742 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
I1205 00:04:58.030401 216030 logs.go:123] Gathering logs for storage-provisioner [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b] ...
I1205 00:04:58.030436 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
I1205 00:04:58.089905 216030 logs.go:123] Gathering logs for storage-provisioner [cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf] ...
I1205 00:04:58.089933 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
I1205 00:04:58.129773 216030 logs.go:123] Gathering logs for etcd [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716] ...
I1205 00:04:58.129861 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
I1205 00:04:58.170834 216030 logs.go:123] Gathering logs for coredns [9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c] ...
I1205 00:04:58.170863 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
I1205 00:04:58.217420 216030 logs.go:123] Gathering logs for kube-scheduler [05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0] ...
I1205 00:04:58.217449 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
I1205 00:04:58.264707 216030 logs.go:123] Gathering logs for kube-proxy [f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88] ...
I1205 00:04:58.264735 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
I1205 00:04:58.314661 216030 logs.go:123] Gathering logs for kube-controller-manager [cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15] ...
I1205 00:04:58.314686 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
I1205 00:04:58.372507 216030 logs.go:123] Gathering logs for kindnet [3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d] ...
I1205 00:04:58.372541 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
I1205 00:04:58.414881 216030 logs.go:123] Gathering logs for kubelet ...
I1205 00:04:58.414910 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1205 00:04:58.477133 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029161 664 reflector.go:138] object-"kube-system"/"kindnet-token-rrxv8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rrxv8" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:58.477409 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029640 664 reflector.go:138] object-"default"/"default-token-6q5g5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6q5g5" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:58.477645 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029889 664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7b2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7b2f" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:58.483742 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:23 old-k8s-version-066167 kubelet[664]: E1204 23:59:23.455493 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:58.484031 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:24 old-k8s-version-066167 kubelet[664]: E1204 23:59:24.408536 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.486976 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:36 old-k8s-version-066167 kubelet[664]: E1204 23:59:36.194820 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:58.489032 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:46 old-k8s-version-066167 kubelet[664]: E1204 23:59:46.769553 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.489223 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.173711 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.489549 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.774292 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.490248 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.679719 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.490681 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.812800 664 pod_workers.go:191] Error syncing pod 81fe575b-ab3c-49a1-b013-84ec8c0bea1c ("storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"
W1205 00:04:58.493488 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:00 old-k8s-version-066167 kubelet[664]: E1205 00:00:00.315343 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:58.494069 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:08 old-k8s-version-066167 kubelet[664]: E1205 00:00:08.854222 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.494382 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:11 old-k8s-version-066167 kubelet[664]: E1205 00:00:11.169368 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.494707 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:15 old-k8s-version-066167 kubelet[664]: E1205 00:00:15.678257 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.494888 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:26 old-k8s-version-066167 kubelet[664]: E1205 00:00:26.169392 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.495213 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:27 old-k8s-version-066167 kubelet[664]: E1205 00:00:27.168949 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.495824 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:39 old-k8s-version-066167 kubelet[664]: E1205 00:00:39.964267 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.498358 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:41 old-k8s-version-066167 kubelet[664]: E1205 00:00:41.177237 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:58.498692 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:45 old-k8s-version-066167 kubelet[664]: E1205 00:00:45.677813 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.498876 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:54 old-k8s-version-066167 kubelet[664]: E1205 00:00:54.170310 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.499205 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:57 old-k8s-version-066167 kubelet[664]: E1205 00:00:57.168714 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.499388 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:07 old-k8s-version-066167 kubelet[664]: E1205 00:01:07.169610 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.499711 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:08 old-k8s-version-066167 kubelet[664]: E1205 00:01:08.169137 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.500296 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.080372 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.500479 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.172882 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.500805 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:25 old-k8s-version-066167 kubelet[664]: E1205 00:01:25.677810 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.500988 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:36 old-k8s-version-066167 kubelet[664]: E1205 00:01:36.169551 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.501317 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:39 old-k8s-version-066167 kubelet[664]: E1205 00:01:39.170525 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.501505 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:50 old-k8s-version-066167 kubelet[664]: E1205 00:01:50.170399 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.501834 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:52 old-k8s-version-066167 kubelet[664]: E1205 00:01:52.168832 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.502157 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:03 old-k8s-version-066167 kubelet[664]: E1205 00:02:03.168796 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.504721 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:04 old-k8s-version-066167 kubelet[664]: E1205 00:02:04.179849 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:58.505045 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.169877 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.505250 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.170577 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.505435 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:27 old-k8s-version-066167 kubelet[664]: E1205 00:02:27.169252 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.505771 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:30 old-k8s-version-066167 kubelet[664]: E1205 00:02:30.169307 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.505960 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:40 old-k8s-version-066167 kubelet[664]: E1205 00:02:40.172577 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.506540 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:43 old-k8s-version-066167 kubelet[664]: E1205 00:02:43.346410 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.506865 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:45 old-k8s-version-066167 kubelet[664]: E1205 00:02:45.677951 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.507048 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:53 old-k8s-version-066167 kubelet[664]: E1205 00:02:53.169354 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.507371 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:58 old-k8s-version-066167 kubelet[664]: E1205 00:02:58.169560 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.507552 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:08 old-k8s-version-066167 kubelet[664]: E1205 00:03:08.172186 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.507880 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:12 old-k8s-version-066167 kubelet[664]: E1205 00:03:12.169424 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.508061 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:21 old-k8s-version-066167 kubelet[664]: E1205 00:03:21.169226 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.508388 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:23 old-k8s-version-066167 kubelet[664]: E1205 00:03:23.168967 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.508586 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:34 old-k8s-version-066167 kubelet[664]: E1205 00:03:34.173087 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.508910 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:36 old-k8s-version-066167 kubelet[664]: E1205 00:03:36.169303 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.509240 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.168901 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.509423 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.170002 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.509780 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169436 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.509978 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169654 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.510307 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: E1205 00:04:14.169276 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.510488 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.510817 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.511000 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.511326 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.511509 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.511836 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: E1205 00:04:51.168842 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.514252 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181049 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
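The repeated kubelet problems above are expected for this test rather than a separate failure: earlier in the run the metrics-server addon was enabled with its registry overridden to the unresolvable host fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" row in the Audit table further down), so every pull of fake.domain/registry.k8s.io/echoserver:1.4 fails DNS resolution and the kubelet backs off, while the dashboard-metrics-scraper entries are an ordinary CrashLoopBackOff from a repeatedly exiting container. A minimal sketch of confirming this from the host, assuming kubectl is pointed at the profile's context and that the addon pod carries its usual k8s-app=metrics-server label:

    kubectl --context old-k8s-version-066167 -n kube-system get pods
    kubectl --context old-k8s-version-066167 -n kube-system describe pod -l k8s-app=metrics-server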
I1205 00:04:58.514266 216030 logs.go:123] Gathering logs for dmesg ...
I1205 00:04:58.514280 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1205 00:04:58.534466 216030 logs.go:123] Gathering logs for describe nodes ...
I1205 00:04:58.534492 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1205 00:04:58.682096 216030 logs.go:123] Gathering logs for coredns [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da] ...
I1205 00:04:58.682123 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
I1205 00:04:58.725405 216030 logs.go:123] Gathering logs for kubernetes-dashboard [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e] ...
I1205 00:04:58.725431 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
I1205 00:04:58.773720 216030 logs.go:123] Gathering logs for containerd ...
I1205 00:04:58.773748 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1205 00:04:58.836186 216030 logs.go:123] Gathering logs for kindnet [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae] ...
I1205 00:04:58.836222 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
I1205 00:04:58.899828 216030 logs.go:123] Gathering logs for container status ...
I1205 00:04:58.899854 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1205 00:04:58.942941 216030 logs.go:123] Gathering logs for kube-apiserver [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7] ...
I1205 00:04:58.942971 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
I1205 00:04:59.018047 216030 logs.go:123] Gathering logs for kube-apiserver [138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8] ...
I1205 00:04:59.018114 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
I1205 00:04:59.103130 216030 logs.go:123] Gathering logs for etcd [03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e] ...
I1205 00:04:59.103163 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
I1205 00:04:59.151511 216030 logs.go:123] Gathering logs for kube-scheduler [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30] ...
I1205 00:04:59.151539 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
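Each "Gathering logs for ..." step above is minikube running the quoted command over SSH inside the node container, so the same commands can be replayed by hand while the profile is still up. A sketch, assuming minikube ssh forwards a trailing command (profile flag as used elsewhere in this log):

    out/minikube-linux-arm64 -p old-k8s-version-066167 ssh -- sudo crictl ps -a
    out/minikube-linux-arm64 -p old-k8s-version-066167 ssh -- sudo journalctl -u containerd -n 400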
I1205 00:04:59.192352 216030 out.go:358] Setting ErrFile to fd 2...
I1205 00:04:59.192377 216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1205 00:04:59.192481 216030 out.go:270] X Problems detected in kubelet:
W1205 00:04:59.192494 216030 out.go:270] Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:59.192519 216030 out.go:270] Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:59.192537 216030 out.go:270] Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:59.192564 216030 out.go:270] Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: E1205 00:04:51.168842 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:59.192574 216030 out.go:270] Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181049 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
I1205 00:04:59.192585 216030 out.go:358] Setting ErrFile to fd 2...
I1205 00:04:59.192591 216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 00:05:09.194046 216030 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1205 00:05:09.205203 216030 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1205 00:05:09.208473 216030 out.go:201]
W1205 00:05:09.210824 216030 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1205 00:05:09.210861 216030 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1205 00:05:09.210876 216030 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1205 00:05:09.210882 216030 out.go:270] *
W1205 00:05:09.211748 216030 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
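The suggested recovery is destructive: "delete --all --purge" removes every profile along with the cached images and certificates under MINIKUBE_HOME, after which the start command quoted at the top of this test would be re-run from scratch. Using the binary from this run:

    out/minikube-linux-arm64 delete --all --purge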
I1205 00:05:09.215055 216030 out.go:201]
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-066167 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-066167
helpers_test.go:235: (dbg) docker inspect old-k8s-version-066167:
-- stdout --
[
{
"Id": "b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581",
"Created": "2024-12-04T23:56:18.334273178Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 216226,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-12-04T23:58:52.621554643Z",
"FinishedAt": "2024-12-04T23:58:51.550436754Z"
},
"Image": "sha256:51526bd7c0894c18bc1ef50650a0aaaea3bed24f70f72f77ac668ae72dfff137",
"ResolvConfPath": "/var/lib/docker/containers/b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581/hostname",
"HostsPath": "/var/lib/docker/containers/b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581/hosts",
"LogPath": "/var/lib/docker/containers/b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581/b95ea62abf924b4cd6666efeb76acc2a80cb97174b211345e87c225902203581-json.log",
"Name": "/old-k8s-version-066167",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-066167:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-066167",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/a707e44e8fb4f9f672acf72041ac8c97c7693e1d9d2e7fb11df32d3a76e7124d-init/diff:/var/lib/docker/overlay2/c12526196c20c242bf0c04aa29eed00ae00c2b2711c7a888146d1a43e3b60445/diff",
"MergedDir": "/var/lib/docker/overlay2/a707e44e8fb4f9f672acf72041ac8c97c7693e1d9d2e7fb11df32d3a76e7124d/merged",
"UpperDir": "/var/lib/docker/overlay2/a707e44e8fb4f9f672acf72041ac8c97c7693e1d9d2e7fb11df32d3a76e7124d/diff",
"WorkDir": "/var/lib/docker/overlay2/a707e44e8fb4f9f672acf72041ac8c97c7693e1d9d2e7fb11df32d3a76e7124d/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-066167",
"Source": "/var/lib/docker/volumes/old-k8s-version-066167/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-066167",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-066167",
"name.minikube.sigs.k8s.io": "old-k8s-version-066167",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "7607567a027d24764f241e2ca0319de6a4e929a7935befeeb6cc1fc8e78d51dc",
"SandboxKey": "/var/run/docker/netns/7607567a027d",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33063"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33064"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33067"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33065"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33066"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-066167": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "38265aed5fbcc98804134ea94763ea6df8e2518dd3605389f6e3308899d8146d",
"EndpointID": "4be59e607421da4aea843213718b25cde4b684c9316193e117bd24c54ec92fe2",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-066167",
"b95ea62abf92"
]
}
}
}
}
]
-- /stdout --
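When reading a dump like the one above by hand, docker inspect's --format templating (minikube itself uses --format={{.State.Status}} later in this log) pulls out the few fields that usually matter. A sketch against the same container, assuming every exposed port has a host binding, as it does in the dump above:

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}} restarts={{.RestartCount}}' old-k8s-version-066167
    docker inspect -f '{{range $p, $b := .NetworkSettings.Ports}}{{$p}} -> {{(index $b 0).HostPort}}{{"\n"}}{{end}}' old-k8s-version-066167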
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-066167 -n old-k8s-version-066167
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-066167 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-066167 logs -n 25: (2.091934909s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| ssh | -p cilium-147448 sudo | cilium-147448 | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC | |
| | containerd config dump | | | | | |
| ssh | -p cilium-147448 sudo | cilium-147448 | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC | |
| | systemctl status crio --all | | | | | |
| | --full --no-pager | | | | | |
| ssh | -p cilium-147448 sudo | cilium-147448 | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC | |
| | systemctl cat crio --no-pager | | | | | |
| ssh | -p cilium-147448 sudo find | cilium-147448 | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC | |
| | /etc/crio -type f -exec sh -c | | | | | |
| | 'echo {}; cat {}' \; | | | | | |
| ssh | -p cilium-147448 sudo crio | cilium-147448 | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC | |
| | config | | | | | |
| delete | -p cilium-147448 | cilium-147448 | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC | 04 Dec 24 23:54 UTC |
| start | -p cert-expiration-688223 | cert-expiration-688223 | jenkins | v1.34.0 | 04 Dec 24 23:54 UTC | 04 Dec 24 23:55 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-932373 | force-systemd-env-932373 | jenkins | v1.34.0 | 04 Dec 24 23:55 UTC | 04 Dec 24 23:55 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-932373 | force-systemd-env-932373 | jenkins | v1.34.0 | 04 Dec 24 23:55 UTC | 04 Dec 24 23:55 UTC |
| start | -p cert-options-516338 | cert-options-516338 | jenkins | v1.34.0 | 04 Dec 24 23:55 UTC | 04 Dec 24 23:56 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-516338 ssh | cert-options-516338 | jenkins | v1.34.0 | 04 Dec 24 23:56 UTC | 04 Dec 24 23:56 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-516338 -- sudo | cert-options-516338 | jenkins | v1.34.0 | 04 Dec 24 23:56 UTC | 04 Dec 24 23:56 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-516338 | cert-options-516338 | jenkins | v1.34.0 | 04 Dec 24 23:56 UTC | 04 Dec 24 23:56 UTC |
| start | -p old-k8s-version-066167 | old-k8s-version-066167 | jenkins | v1.34.0 | 04 Dec 24 23:56 UTC | 04 Dec 24 23:58 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-688223 | cert-expiration-688223 | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:58 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| addons | enable metrics-server -p old-k8s-version-066167 | old-k8s-version-066167 | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:58 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-066167 | old-k8s-version-066167 | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:58 UTC |
| | --alsologtostderr -v=3 | | | | | |
| delete | -p cert-expiration-688223 | cert-expiration-688223 | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:58 UTC |
| start | -p no-preload-013030 | no-preload-013030 | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:59 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| addons | enable dashboard -p old-k8s-version-066167 | old-k8s-version-066167 | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | 04 Dec 24 23:58 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-066167 | old-k8s-version-066167 | jenkins | v1.34.0 | 04 Dec 24 23:58 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-013030 | no-preload-013030 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-013030 | no-preload-013030 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-013030 | no-preload-013030 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | 05 Dec 24 00:00 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-013030 | no-preload-013030 | jenkins | v1.34.0 | 05 Dec 24 00:00 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/12/05 00:00:22
Running on machine: ip-172-31-24-2
Binary: Built with gc go1.23.2 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1205 00:00:22.771753 221677 out.go:345] Setting OutFile to fd 1 ...
I1205 00:00:22.772024 221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 00:00:22.772055 221677 out.go:358] Setting ErrFile to fd 2...
I1205 00:00:22.772088 221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 00:00:22.772457 221677 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-2283/.minikube/bin
I1205 00:00:22.773042 221677 out.go:352] Setting JSON to false
I1205 00:00:22.774794 221677 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6173,"bootTime":1733350650,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
I1205 00:00:22.774913 221677 start.go:139] virtualization:
I1205 00:00:22.778357 221677 out.go:177] * [no-preload-013030] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1205 00:00:22.781982 221677 out.go:177] - MINIKUBE_LOCATION=20045
I1205 00:00:22.782160 221677 notify.go:220] Checking for updates...
I1205 00:00:22.787238 221677 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1205 00:00:22.789958 221677 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20045-2283/kubeconfig
I1205 00:00:22.792620 221677 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-2283/.minikube
I1205 00:00:22.795369 221677 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1205 00:00:22.798053 221677 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1205 00:00:22.801201 221677 config.go:182] Loaded profile config "no-preload-013030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1205 00:00:22.801755 221677 driver.go:394] Setting default libvirt URI to qemu:///system
I1205 00:00:22.835793 221677 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I1205 00:00:22.835968 221677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1205 00:00:22.906416 221677 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-05 00:00:22.896670658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1205 00:00:22.906524 221677 docker.go:318] overlay module found
I1205 00:00:22.909298 221677 out.go:177] * Using the docker driver based on existing profile
I1205 00:00:22.911873 221677 start.go:297] selected driver: docker
I1205 00:00:22.911892 221677 start.go:901] validating driver "docker" against &{Name:no-preload-013030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-013030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 00:00:22.911987 221677 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1205 00:00:22.912738 221677 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1205 00:00:22.981677 221677 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-05 00:00:22.968050911 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1205 00:00:22.982092 221677 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1205 00:00:22.982125 221677 cni.go:84] Creating CNI manager for ""
I1205 00:00:22.982169 221677 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1205 00:00:22.982215 221677 start.go:340] cluster config:
{Name:no-preload-013030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-013030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 00:00:22.986916 221677 out.go:177] * Starting "no-preload-013030" primary control-plane node in "no-preload-013030" cluster
I1205 00:00:22.989768 221677 cache.go:121] Beginning downloading kic base image for docker with containerd
I1205 00:00:22.992479 221677 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
I1205 00:00:22.995197 221677 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1205 00:00:22.995283 221677 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
I1205 00:00:22.995354 221677 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/config.json ...
I1205 00:00:22.995663 221677 cache.go:107] acquiring lock: {Name:mk9da510fc959c7758b67ff4efdc922f3d1213ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:00:22.995750 221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1205 00:00:22.995769 221677 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 111.594µs
I1205 00:00:22.995778 221677 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1205 00:00:22.995795 221677 cache.go:107] acquiring lock: {Name:mk90b2210b9aa218ced54e9ad59b1559b758ea50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:00:22.995832 221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
I1205 00:00:22.995841 221677 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 47.809µs
I1205 00:00:22.995847 221677 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
I1205 00:00:22.995857 221677 cache.go:107] acquiring lock: {Name:mk824b140991ed1d076f69c25b5d723578c5bec8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:00:22.995885 221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
I1205 00:00:22.995895 221677 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 39.17µs
I1205 00:00:22.995902 221677 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
I1205 00:00:22.995912 221677 cache.go:107] acquiring lock: {Name:mkef398d006b259cd437f7ff4d09d913391bb913 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:00:22.995939 221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
I1205 00:00:22.995950 221677 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 38.76µs
I1205 00:00:22.995963 221677 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
I1205 00:00:22.995976 221677 cache.go:107] acquiring lock: {Name:mkfa105860076730031a80b15339e0db74389978 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:00:22.996007 221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
I1205 00:00:22.996017 221677 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 41.722µs
I1205 00:00:22.996023 221677 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
I1205 00:00:22.996034 221677 cache.go:107] acquiring lock: {Name:mk4fff236731e18fbfdb75157a24d79a08ae90e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:00:22.996064 221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
I1205 00:00:22.996073 221677 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 40.343µs
I1205 00:00:22.996079 221677 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
I1205 00:00:22.996099 221677 cache.go:107] acquiring lock: {Name:mka5df1fb95f4640c2fcb4dd5c6f811b518cfd11 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:00:22.996130 221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
I1205 00:00:22.996139 221677 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 41.697µs
I1205 00:00:22.996145 221677 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
I1205 00:00:22.996154 221677 cache.go:107] acquiring lock: {Name:mkc76832b9384f9aff33c7cfc2d625069b4bd563 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:00:22.996186 221677 cache.go:115] /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
I1205 00:00:22.996194 221677 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 41.484µs
I1205 00:00:22.996200 221677 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
I1205 00:00:22.996206 221677 cache.go:87] Successfully saved all images to host disk.
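All of the cache.go:115 checks above found their per-image tarballs already on disk, which is why this start downloads nothing; the cache layout mirrors the registry path. The same files can be listed directly with the path printed above:

    ls /home/jenkins/minikube-integration/20045-2283/.minikube/cache/images/arm64/registry.k8s.io/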
I1205 00:00:23.024680 221677 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
I1205 00:00:23.024707 221677 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
I1205 00:00:23.024725 221677 cache.go:194] Successfully downloaded all kic artifacts
I1205 00:00:23.024749 221677 start.go:360] acquireMachinesLock for no-preload-013030: {Name:mkf3466c8e736c81de5b2facb9709787c162d97b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:00:23.024913 221677 start.go:364] duration metric: took 137.259µs to acquireMachinesLock for "no-preload-013030"
I1205 00:00:23.024959 221677 start.go:96] Skipping create...Using existing machine configuration
I1205 00:00:23.024967 221677 fix.go:54] fixHost starting:
I1205 00:00:23.025323 221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
I1205 00:00:23.043868 221677 fix.go:112] recreateIfNeeded on no-preload-013030: state=Stopped err=<nil>
W1205 00:00:23.043909 221677 fix.go:138] unexpected machine state, will restart: <nil>
I1205 00:00:23.048784 221677 out.go:177] * Restarting existing docker container for "no-preload-013030" ...
I1205 00:00:23.624178 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:26.123881 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:23.051562 221677 cli_runner.go:164] Run: docker start no-preload-013030
I1205 00:00:23.380258 221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
I1205 00:00:23.405882 221677 kic.go:430] container "no-preload-013030" state is running.
I1205 00:00:23.406280 221677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-013030
I1205 00:00:23.434868 221677 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/config.json ...
I1205 00:00:23.435095 221677 machine.go:93] provisionDockerMachine start ...
I1205 00:00:23.435152 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:23.463531 221677 main.go:141] libmachine: Using SSH client type: native
I1205 00:00:23.463789 221677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I1205 00:00:23.463798 221677 main.go:141] libmachine: About to run SSH command:
hostname
I1205 00:00:23.465070 221677 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53028->127.0.0.1:33068: read: connection reset by peer
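The handshake failure here is transient: docker start has only just returned, sshd inside the kicbase container is not yet accepting connections, and libmachine evidently retries until the hostname probe succeeds three seconds later. A manual equivalent, reusing the key path, port, and user recorded further down in this log:

    ssh -i /home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa -p 33068 -o StrictHostKeyChecking=no docker@127.0.0.1 hostname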
I1205 00:00:26.593262 221677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-013030
I1205 00:00:26.593297 221677 ubuntu.go:169] provisioning hostname "no-preload-013030"
I1205 00:00:26.593359 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:26.611479 221677 main.go:141] libmachine: Using SSH client type: native
I1205 00:00:26.611725 221677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I1205 00:00:26.611737 221677 main.go:141] libmachine: About to run SSH command:
sudo hostname no-preload-013030 && echo "no-preload-013030" | sudo tee /etc/hostname
I1205 00:00:26.751962 221677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-013030
I1205 00:00:26.752060 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:26.768780 221677 main.go:141] libmachine: Using SSH client type: native
I1205 00:00:26.769029 221677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415ef0] 0x418730 <nil> [] 0s} 127.0.0.1 33068 <nil> <nil>}
I1205 00:00:26.769051 221677 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sno-preload-013030' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-013030/g' /etc/hosts;
else
echo '127.0.1.1 no-preload-013030' | sudo tee -a /etc/hosts;
fi
fi
I1205 00:00:26.898282 221677 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1205 00:00:26.898375 221677 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20045-2283/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-2283/.minikube}
I1205 00:00:26.898410 221677 ubuntu.go:177] setting up certificates
I1205 00:00:26.898454 221677 provision.go:84] configureAuth start
I1205 00:00:26.898563 221677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-013030
I1205 00:00:26.916483 221677 provision.go:143] copyHostCerts
I1205 00:00:26.916566 221677 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem, removing ...
I1205 00:00:26.916578 221677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem
I1205 00:00:26.916658 221677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/ca.pem (1082 bytes)
I1205 00:00:26.916769 221677 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem, removing ...
I1205 00:00:26.916774 221677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem
I1205 00:00:26.916800 221677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/cert.pem (1123 bytes)
I1205 00:00:26.916853 221677 exec_runner.go:144] found /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem, removing ...
I1205 00:00:26.916857 221677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem
I1205 00:00:26.916880 221677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-2283/.minikube/key.pem (1679 bytes)
I1205 00:00:26.916927 221677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem org=jenkins.no-preload-013030 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-013030]
I1205 00:00:27.063684 221677 provision.go:177] copyRemoteCerts
I1205 00:00:27.063761 221677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1205 00:00:27.063803 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:27.081682 221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
I1205 00:00:27.181983 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1205 00:00:27.207152 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I1205 00:00:27.231598 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I1205 00:00:27.256704 221677 provision.go:87] duration metric: took 358.220423ms to configureAuth
I1205 00:00:27.256781 221677 ubuntu.go:193] setting minikube options for container-runtime
I1205 00:00:27.256991 221677 config.go:182] Loaded profile config "no-preload-013030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1205 00:00:27.257006 221677 machine.go:96] duration metric: took 3.821903614s to provisionDockerMachine
I1205 00:00:27.257016 221677 start.go:293] postStartSetup for "no-preload-013030" (driver="docker")
I1205 00:00:27.257026 221677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1205 00:00:27.257077 221677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1205 00:00:27.257196 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:27.273570 221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
I1205 00:00:27.362324 221677 ssh_runner.go:195] Run: cat /etc/os-release
I1205 00:00:27.365632 221677 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1205 00:00:27.365681 221677 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1205 00:00:27.365708 221677 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1205 00:00:27.365721 221677 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1205 00:00:27.365732 221677 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-2283/.minikube/addons for local assets ...
I1205 00:00:27.365807 221677 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-2283/.minikube/files for local assets ...
I1205 00:00:27.365920 221677 filesync.go:149] local asset: /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem -> 77362.pem in /etc/ssl/certs
I1205 00:00:27.366065 221677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1205 00:00:27.374680 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem --> /etc/ssl/certs/77362.pem (1708 bytes)
I1205 00:00:27.400024 221677 start.go:296] duration metric: took 142.993536ms for postStartSetup
I1205 00:00:27.400152 221677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1205 00:00:27.400201 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:27.416549 221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
I1205 00:00:27.503274 221677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1205 00:00:27.508372 221677 fix.go:56] duration metric: took 4.483399982s for fixHost
I1205 00:00:27.508416 221677 start.go:83] releasing machines lock for "no-preload-013030", held for 4.483485805s
I1205 00:00:27.508502 221677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-013030
I1205 00:00:27.525451 221677 ssh_runner.go:195] Run: cat /version.json
I1205 00:00:27.525536 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:27.525623 221677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1205 00:00:27.525677 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:27.551809 221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
I1205 00:00:27.555961 221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
I1205 00:00:27.644526 221677 ssh_runner.go:195] Run: systemctl --version
I1205 00:00:27.787747 221677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1205 00:00:27.792224 221677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1205 00:00:27.810665 221677 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1205 00:00:27.810760 221677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1205 00:00:27.819727 221677 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1205 00:00:27.819758 221677 start.go:495] detecting cgroup driver to use...
I1205 00:00:27.819811 221677 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1205 00:00:27.819876 221677 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1205 00:00:27.833971 221677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1205 00:00:27.845925 221677 docker.go:217] disabling cri-docker service (if available) ...
I1205 00:00:27.846045 221677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1205 00:00:27.859493 221677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1205 00:00:27.871602 221677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1205 00:00:27.973945 221677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1205 00:00:28.083981 221677 docker.go:233] disabling docker service ...
I1205 00:00:28.084077 221677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1205 00:00:28.101680 221677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1205 00:00:28.116262 221677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1205 00:00:28.214291 221677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1205 00:00:28.307646 221677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1205 00:00:28.318962 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1205 00:00:28.336388 221677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I1205 00:00:28.347035 221677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1205 00:00:28.357426 221677 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1205 00:00:28.357510 221677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1205 00:00:28.368184 221677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1205 00:00:28.378773 221677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1205 00:00:28.389054 221677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1205 00:00:28.398909 221677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1205 00:00:28.408434 221677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1205 00:00:28.418511 221677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1205 00:00:28.428426 221677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1205 00:00:28.439623 221677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1205 00:00:28.449360 221677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1205 00:00:28.458018 221677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1205 00:00:28.549008 221677 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1205 00:00:28.749668 221677 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1205 00:00:28.749786 221677 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1205 00:00:28.754237 221677 start.go:563] Will wait 60s for crictl version
I1205 00:00:28.754355 221677 ssh_runner.go:195] Run: which crictl
I1205 00:00:28.758057 221677 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1205 00:00:28.799691 221677 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1205 00:00:28.799781 221677 ssh_runner.go:195] Run: containerd --version
I1205 00:00:28.822981 221677 ssh_runner.go:195] Run: containerd --version
I1205 00:00:28.854936 221677 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.22 ...
I1205 00:00:28.857684 221677 cli_runner.go:164] Run: docker network inspect no-preload-013030 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1205 00:00:28.872864 221677 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1205 00:00:28.876372 221677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1205 00:00:28.886902 221677 kubeadm.go:883] updating cluster {Name:no-preload-013030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-013030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1205 00:00:28.887053 221677 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1205 00:00:28.887110 221677 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 00:00:28.928079 221677 containerd.go:627] all images are preloaded for containerd runtime.
I1205 00:00:28.928104 221677 cache_images.go:84] Images are preloaded, skipping loading
I1205 00:00:28.928112 221677 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.2 containerd true true} ...
I1205 00:00:28.928215 221677 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-013030 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.31.2 ClusterName:no-preload-013030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1205 00:00:28.928282 221677 ssh_runner.go:195] Run: sudo crictl info
I1205 00:00:28.971509 221677 cni.go:84] Creating CNI manager for ""
I1205 00:00:28.971582 221677 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1205 00:00:28.971607 221677 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1205 00:00:28.971660 221677 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-013030 NodeName:no-preload-013030 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1205 00:00:28.971844 221677 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "no-preload-013030"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.31.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I1205 00:00:28.971967 221677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
I1205 00:00:28.984809 221677 binaries.go:44] Found k8s binaries, skipping transfer
I1205 00:00:28.984910 221677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1205 00:00:28.996457 221677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
I1205 00:00:29.016978 221677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1205 00:00:29.039571 221677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2307 bytes)
I1205 00:00:29.059288 221677 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1205 00:00:29.063109 221677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1205 00:00:29.074544 221677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1205 00:00:29.171709 221677 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1205 00:00:29.187636 221677 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030 for IP: 192.168.85.2
I1205 00:00:29.187659 221677 certs.go:194] generating shared ca certs ...
I1205 00:00:29.187678 221677 certs.go:226] acquiring lock for ca certs: {Name:mk1d98569ca320b9ee7e00b709eb6c9a159130d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 00:00:29.187852 221677 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-2283/.minikube/ca.key
I1205 00:00:29.187909 221677 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.key
I1205 00:00:29.187921 221677 certs.go:256] generating profile certs ...
I1205 00:00:29.188024 221677 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/client.key
I1205 00:00:29.188103 221677 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/apiserver.key.8c251c27
I1205 00:00:29.188157 221677 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/proxy-client.key
I1205 00:00:29.188318 221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736.pem (1338 bytes)
W1205 00:00:29.188361 221677 certs.go:480] ignoring /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736_empty.pem, impossibly tiny 0 bytes
I1205 00:00:29.188373 221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca-key.pem (1675 bytes)
I1205 00:00:29.188404 221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/ca.pem (1082 bytes)
I1205 00:00:29.188436 221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/cert.pem (1123 bytes)
I1205 00:00:29.188469 221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/certs/key.pem (1679 bytes)
I1205 00:00:29.188520 221677 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem (1708 bytes)
I1205 00:00:29.189242 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1205 00:00:29.216435 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1205 00:00:29.241407 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1205 00:00:29.266772 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1205 00:00:29.291726 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
I1205 00:00:29.317210 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1205 00:00:29.345302 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1205 00:00:29.374962 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/profiles/no-preload-013030/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1205 00:00:29.418225 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/certs/7736.pem --> /usr/share/ca-certificates/7736.pem (1338 bytes)
I1205 00:00:29.445814 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/files/etc/ssl/certs/77362.pem --> /usr/share/ca-certificates/77362.pem (1708 bytes)
I1205 00:00:29.473483 221677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-2283/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1205 00:00:29.499424 221677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1205 00:00:29.527810 221677 ssh_runner.go:195] Run: openssl version
I1205 00:00:29.535753 221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7736.pem && ln -fs /usr/share/ca-certificates/7736.pem /etc/ssl/certs/7736.pem"
I1205 00:00:29.546677 221677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7736.pem
I1205 00:00:29.550834 221677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 4 23:19 /usr/share/ca-certificates/7736.pem
I1205 00:00:29.550907 221677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7736.pem
I1205 00:00:29.558496 221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7736.pem /etc/ssl/certs/51391683.0"
I1205 00:00:29.568003 221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77362.pem && ln -fs /usr/share/ca-certificates/77362.pem /etc/ssl/certs/77362.pem"
I1205 00:00:29.577950 221677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77362.pem
I1205 00:00:29.581831 221677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 4 23:19 /usr/share/ca-certificates/77362.pem
I1205 00:00:29.581898 221677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77362.pem
I1205 00:00:29.588733 221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77362.pem /etc/ssl/certs/3ec20f2e.0"
I1205 00:00:29.597904 221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1205 00:00:29.612020 221677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1205 00:00:29.616296 221677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 4 23:11 /usr/share/ca-certificates/minikubeCA.pem
I1205 00:00:29.616413 221677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1205 00:00:29.628338 221677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1205 00:00:29.637879 221677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1205 00:00:29.641327 221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1205 00:00:29.648039 221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1205 00:00:29.654960 221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1205 00:00:29.662103 221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1205 00:00:29.669444 221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1205 00:00:29.676413 221677 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1205 00:00:29.683895 221677 kubeadm.go:392] StartCluster: {Name:no-preload-013030 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-013030 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1205 00:00:29.684039 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1205 00:00:29.684137 221677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1205 00:00:29.730293 221677 cri.go:89] found id: "3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9"
I1205 00:00:29.730356 221677 cri.go:89] found id: "fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be"
I1205 00:00:29.730374 221677 cri.go:89] found id: "8c5755436bd099e0109e8164517c428a7492b4ba0b822bf3510106d259f125a0"
I1205 00:00:29.730394 221677 cri.go:89] found id: "f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed"
I1205 00:00:29.730398 221677 cri.go:89] found id: "627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6"
I1205 00:00:29.730402 221677 cri.go:89] found id: "8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654"
I1205 00:00:29.730405 221677 cri.go:89] found id: "e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c"
I1205 00:00:29.730408 221677 cri.go:89] found id: "55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778"
I1205 00:00:29.730411 221677 cri.go:89] found id: ""
I1205 00:00:29.730479 221677 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1205 00:00:29.742877 221677 cri.go:116] JSON = null
W1205 00:00:29.742953 221677 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
I1205 00:00:29.743046 221677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1205 00:00:29.751751 221677 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I1205 00:00:29.751775 221677 kubeadm.go:593] restartPrimaryControlPlane start ...
I1205 00:00:29.751847 221677 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1205 00:00:29.760713 221677 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1205 00:00:29.761508 221677 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-013030" does not appear in /home/jenkins/minikube-integration/20045-2283/kubeconfig
I1205 00:00:29.761788 221677 kubeconfig.go:62] /home/jenkins/minikube-integration/20045-2283/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-013030" cluster setting kubeconfig missing "no-preload-013030" context setting]
I1205 00:00:29.762765 221677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-2283/kubeconfig: {Name:mka3b7dd57c7b1524b8db81fd47d2a503644c81e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 00:00:29.764269 221677 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1205 00:00:29.773839 221677 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I1205 00:00:29.773871 221677 kubeadm.go:597] duration metric: took 22.089863ms to restartPrimaryControlPlane
I1205 00:00:29.773880 221677 kubeadm.go:394] duration metric: took 89.995888ms to StartCluster
I1205 00:00:29.773897 221677 settings.go:142] acquiring lock: {Name:mkf88c0c5090e30b7bb8c2e4a8e4f7c9dd68316c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 00:00:29.773966 221677 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20045-2283/kubeconfig
I1205 00:00:29.774915 221677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-2283/kubeconfig: {Name:mka3b7dd57c7b1524b8db81fd47d2a503644c81e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1205 00:00:29.775159 221677 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1205 00:00:29.775514 221677 config.go:182] Loaded profile config "no-preload-013030": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1205 00:00:29.775598 221677 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1205 00:00:29.775710 221677 addons.go:69] Setting storage-provisioner=true in profile "no-preload-013030"
I1205 00:00:29.775736 221677 addons.go:234] Setting addon storage-provisioner=true in "no-preload-013030"
W1205 00:00:29.775747 221677 addons.go:243] addon storage-provisioner should already be in state true
I1205 00:00:29.775769 221677 host.go:66] Checking if "no-preload-013030" exists ...
I1205 00:00:29.776260 221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
I1205 00:00:29.776667 221677 addons.go:69] Setting default-storageclass=true in profile "no-preload-013030"
I1205 00:00:29.776689 221677 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-013030"
I1205 00:00:29.777031 221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
I1205 00:00:29.777095 221677 addons.go:69] Setting metrics-server=true in profile "no-preload-013030"
I1205 00:00:29.777154 221677 addons.go:234] Setting addon metrics-server=true in "no-preload-013030"
W1205 00:00:29.777163 221677 addons.go:243] addon metrics-server should already be in state true
I1205 00:00:29.777261 221677 host.go:66] Checking if "no-preload-013030" exists ...
I1205 00:00:29.777812 221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
I1205 00:00:29.779136 221677 addons.go:69] Setting dashboard=true in profile "no-preload-013030"
I1205 00:00:29.779157 221677 addons.go:234] Setting addon dashboard=true in "no-preload-013030"
W1205 00:00:29.779164 221677 addons.go:243] addon dashboard should already be in state true
I1205 00:00:29.779186 221677 host.go:66] Checking if "no-preload-013030" exists ...
I1205 00:00:29.779788 221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
I1205 00:00:29.780266 221677 out.go:177] * Verifying Kubernetes components...
I1205 00:00:29.783147 221677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1205 00:00:29.847678 221677 addons.go:234] Setting addon default-storageclass=true in "no-preload-013030"
W1205 00:00:29.847706 221677 addons.go:243] addon default-storageclass should already be in state true
I1205 00:00:29.847733 221677 host.go:66] Checking if "no-preload-013030" exists ...
I1205 00:00:29.853116 221677 cli_runner.go:164] Run: docker container inspect no-preload-013030 --format={{.State.Status}}
I1205 00:00:29.872872 221677 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1205 00:00:29.876631 221677 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1205 00:00:29.880331 221677 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1205 00:00:29.880372 221677 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1205 00:00:29.880443 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:29.880623 221677 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1205 00:00:29.886454 221677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1205 00:00:29.886590 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:29.892273 221677 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1205 00:00:29.896035 221677 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1205 00:00:28.141044 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:30.641472 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:29.901542 221677 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1205 00:00:29.901626 221677 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1205 00:00:29.901709 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:29.902419 221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1205 00:00:29.902439 221677 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1205 00:00:29.902502 221677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-013030
I1205 00:00:29.956071 221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
I1205 00:00:29.956610 221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
I1205 00:00:29.967217 221677 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1205 00:00:29.981943 221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
I1205 00:00:29.991877 221677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/20045-2283/.minikube/machines/no-preload-013030/id_rsa Username:docker}
I1205 00:00:30.005080 221677 node_ready.go:35] waiting up to 6m0s for node "no-preload-013030" to be "Ready" ...
I1205 00:00:30.231598 221677 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1205 00:00:30.231678 221677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1205 00:00:30.281468 221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1205 00:00:30.281546 221677 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1205 00:00:30.287468 221677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1205 00:00:30.342249 221677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1205 00:00:30.350973 221677 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1205 00:00:30.351054 221677 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1205 00:00:30.427776 221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1205 00:00:30.427861 221677 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1205 00:00:30.514892 221677 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1205 00:00:30.514983 221677 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1205 00:00:30.583523 221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1205 00:00:30.583609 221677 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
W1205 00:00:30.658635 221677 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I1205 00:00:30.658724 221677 retry.go:31] will retry after 340.301102ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
I1205 00:00:30.767621 221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1205 00:00:30.767683 221677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1205 00:00:30.824694 221677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1205 00:00:30.826945 221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I1205 00:00:30.826973 221677 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1205 00:00:30.875477 221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1205 00:00:30.875506 221677 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1205 00:00:30.946308 221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1205 00:00:30.946334 221677 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1205 00:00:30.993254 221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1205 00:00:30.993279 221677 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1205 00:00:31.000120 221677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1205 00:00:31.067610 221677 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1205 00:00:31.067636 221677 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1205 00:00:31.161892 221677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1205 00:00:33.124970 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:35.125567 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:37.125828 216030 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:35.416417 221677 node_ready.go:49] node "no-preload-013030" has status "Ready":"True"
I1205 00:00:35.416494 221677 node_ready.go:38] duration metric: took 5.411360289s for node "no-preload-013030" to be "Ready" ...
I1205 00:00:35.416519 221677 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1205 00:00:35.537640 221677 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-xgmhd" in "kube-system" namespace to be "Ready" ...
I1205 00:00:35.625207 221677 pod_ready.go:93] pod "coredns-7c65d6cfc9-xgmhd" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:35.625229 221677 pod_ready.go:82] duration metric: took 87.512881ms for pod "coredns-7c65d6cfc9-xgmhd" in "kube-system" namespace to be "Ready" ...
I1205 00:00:35.625241 221677 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-013030" in "kube-system" namespace to be "Ready" ...
I1205 00:00:35.639987 221677 pod_ready.go:93] pod "etcd-no-preload-013030" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:35.640013 221677 pod_ready.go:82] duration metric: took 14.764467ms for pod "etcd-no-preload-013030" in "kube-system" namespace to be "Ready" ...
I1205 00:00:35.640027 221677 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-013030" in "kube-system" namespace to be "Ready" ...
I1205 00:00:35.670217 221677 pod_ready.go:93] pod "kube-apiserver-no-preload-013030" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:35.670242 221677 pod_ready.go:82] duration metric: took 30.206499ms for pod "kube-apiserver-no-preload-013030" in "kube-system" namespace to be "Ready" ...
I1205 00:00:35.670254 221677 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-013030" in "kube-system" namespace to be "Ready" ...
I1205 00:00:35.679710 221677 pod_ready.go:93] pod "kube-controller-manager-no-preload-013030" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:35.679734 221677 pod_ready.go:82] duration metric: took 9.471351ms for pod "kube-controller-manager-no-preload-013030" in "kube-system" namespace to be "Ready" ...
I1205 00:00:35.679748 221677 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7qgmh" in "kube-system" namespace to be "Ready" ...
I1205 00:00:35.685719 221677 pod_ready.go:93] pod "kube-proxy-7qgmh" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:35.685756 221677 pod_ready.go:82] duration metric: took 6.001285ms for pod "kube-proxy-7qgmh" in "kube-system" namespace to be "Ready" ...
I1205 00:00:35.685767 221677 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-013030" in "kube-system" namespace to be "Ready" ...
I1205 00:00:35.848196 221677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.505863869s)
I1205 00:00:37.694670 221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:38.308016 221677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.483235281s)
I1205 00:00:38.308051 221677 addons.go:475] Verifying addon metrics-server=true in "no-preload-013030"
I1205 00:00:38.379601 221677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.379439848s)
I1205 00:00:38.495688 221677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.333749321s)
I1205 00:00:38.498416 221677 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p no-preload-013030 addons enable metrics-server
I1205 00:00:38.501159 221677 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
I1205 00:00:39.622706 216030 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:39.622731 216030 pod_ready.go:82] duration metric: took 1m7.50576737s for pod "kube-controller-manager-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
I1205 00:00:39.622744 216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xh97b" in "kube-system" namespace to be "Ready" ...
I1205 00:00:39.627598 216030 pod_ready.go:93] pod "kube-proxy-xh97b" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:39.627663 216030 pod_ready.go:82] duration metric: took 4.909057ms for pod "kube-proxy-xh97b" in "kube-system" namespace to be "Ready" ...
I1205 00:00:39.627682 216030 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
I1205 00:00:41.635075 216030 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:38.503913 221677 addons.go:510] duration metric: took 8.728324608s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
I1205 00:00:39.695406 221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:41.698192 221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:44.133262 216030 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:45.634685 216030 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:45.634754 216030 pod_ready.go:82] duration metric: took 6.007062956s for pod "kube-scheduler-old-k8s-version-066167" in "kube-system" namespace to be "Ready" ...
I1205 00:00:45.634781 216030 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace to be "Ready" ...
I1205 00:00:44.192232 221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:46.192379 221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:47.641160 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:50.142040 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:48.692355 221677 pod_ready.go:103] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:49.691651 221677 pod_ready.go:93] pod "kube-scheduler-no-preload-013030" in "kube-system" namespace has status "Ready":"True"
I1205 00:00:49.691676 221677 pod_ready.go:82] duration metric: took 14.005902003s for pod "kube-scheduler-no-preload-013030" in "kube-system" namespace to be "Ready" ...
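The pod_ready waits above poll each pod's Ready condition until it turns True or the 6m budget runs out. An equivalent manual check, assuming kubectl is available and minikube has created its usual kubeconfig context named after the profile, would be:
kubectl --context no-preload-013030 -n kube-system wait --for=condition=Ready pod/kube-scheduler-no-preload-013030 --timeout=6m0s   # blocks until the scheduler pod reports Ready or the timeout expires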
I1205 00:00:49.691688 221677 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace to be "Ready" ...
I1205 00:00:51.698244 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:52.640397 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:54.641624 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:57.141636 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:54.199619 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:56.697770 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:59.640966 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:01.641368 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:00:59.198473 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:01.199043 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:04.141819 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:06.641245 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:03.698734 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:06.197658 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:08.643778 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:11.142085 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:08.197916 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:10.198915 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:12.697351 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:13.142210 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:15.143248 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:14.698896 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:17.197638 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:17.640366 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:19.642863 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:22.141401 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:19.198416 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:21.198698 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:24.141731 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:26.640254 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:23.697698 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:25.698515 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:27.698708 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:29.141453 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:31.640747 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:30.197920 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:32.698572 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:33.640815 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:35.640860 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:35.199281 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:37.698663 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:37.641357 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:40.141576 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:42.142551 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:40.197605 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:42.200552 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:44.640790 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:46.640978 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:44.698301 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:46.698450 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:49.140930 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:51.640575 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:49.198002 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:51.198178 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:54.141681 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:56.640948 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:53.698451 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:56.198918 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:58.641251 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:01.140947 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:01:58.697785 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:00.698074 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:02.703184 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:03.141906 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:05.641253 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:05.198730 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:07.697525 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:08.140771 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:10.141503 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:12.141977 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:09.698188 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:11.698438 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:14.640789 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:16.640823 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:13.698487 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:16.199130 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:18.641073 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:20.641191 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:18.697781 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:20.697982 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:22.641262 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:25.142092 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:27.142352 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:23.197363 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:25.197966 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:27.698390 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:29.640668 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:32.144704 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:30.198580 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:32.698095 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:34.641267 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:37.141776 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:34.699022 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:36.699177 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:39.640880 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:42.143365 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:39.198254 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:41.697801 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:44.641367 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:47.141280 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:43.697979 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:45.698225 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:47.698311 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:49.141788 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:51.141822 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:49.698861 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:52.198596 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:53.186734 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:55.641386 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:54.697722 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:57.197390 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:58.141377 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:00.190535 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:02:59.197621 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:01.198155 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:02.640575 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:04.641218 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:07.141448 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:03.200383 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:05.697829 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:07.698244 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:09.142518 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:11.646299 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:09.698594 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:11.699095 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:14.140064 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:16.141711 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:14.198278 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:16.698329 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:18.640395 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:20.641321 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:19.197542 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:21.198082 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:23.141469 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:25.142104 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:23.697909 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:26.198217 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:27.641721 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:30.141599 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:28.697745 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:30.697967 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:32.698010 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:32.640894 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:34.641205 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:37.141279 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:34.701701 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:37.197478 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:39.141499 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:41.141843 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:39.697685 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:41.698710 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:43.142457 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:45.642402 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:43.702407 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:46.197923 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:48.141074 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:50.640840 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:48.698167 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:51.197759 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:52.640941 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:55.142516 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:53.700214 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:56.198709 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:57.641042 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:00.258272 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:03:58.698614 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:01.202258 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:02.640707 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:04.640786 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:06.640980 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:03.698025 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:05.702558 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:08.641054 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:11.146089 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:08.197928 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:10.198008 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:12.697293 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:13.640923 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:16.141477 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:14.698200 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:16.698405 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:18.641364 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:21.154913 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:19.199197 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:21.698068 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:23.640479 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:25.641079 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:24.197565 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:26.197872 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:27.642694 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:30.141328 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:32.142061 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:28.698663 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:31.197409 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:34.646681 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:37.141273 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:33.198422 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:35.709621 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:39.142582 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:41.641272 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:38.197579 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:40.199264 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:42.199775 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:44.154672 216030 pod_ready.go:103] pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:45.641935 216030 pod_ready.go:82] duration metric: took 4m0.007127886s for pod "metrics-server-9975d5f86-ksvdj" in "kube-system" namespace to be "Ready" ...
E1205 00:04:45.641961 216030 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1205 00:04:45.641970 216030 pod_ready.go:39] duration metric: took 5m23.689087349s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
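The metrics-server wait above consumed its entire 4m extra-wait budget. The cause shows up in the kubelet problems gathered further down: this run configures metrics-server with the image fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image fake.domain/..." line in the start output), which can never be resolved or pulled, so the pod never reports Ready. A quick confirmation against the same cluster, assuming the profile's kubeconfig context, is:
kubectl --context old-k8s-version-066167 -n kube-system describe pod metrics-server-9975d5f86-ksvdj   # the Events section shows the ErrImagePull / ImagePullBackOff on fake.domain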
I1205 00:04:45.641984 216030 api_server.go:52] waiting for apiserver process to appear ...
I1205 00:04:45.642014 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1205 00:04:45.642080 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1205 00:04:45.701396 216030 cri.go:89] found id: "d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
I1205 00:04:45.701417 216030 cri.go:89] found id: "138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
I1205 00:04:45.701422 216030 cri.go:89] found id: ""
I1205 00:04:45.701428 216030 logs.go:282] 2 containers: [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8]
I1205 00:04:45.701487 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.706274 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.709870 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1205 00:04:45.709950 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1205 00:04:45.752726 216030 cri.go:89] found id: "d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
I1205 00:04:45.752759 216030 cri.go:89] found id: "03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
I1205 00:04:45.752764 216030 cri.go:89] found id: ""
I1205 00:04:45.752771 216030 logs.go:282] 2 containers: [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e]
I1205 00:04:45.752844 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.756595 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.759984 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1205 00:04:45.760054 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1205 00:04:45.802699 216030 cri.go:89] found id: "18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
I1205 00:04:45.802722 216030 cri.go:89] found id: "9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
I1205 00:04:45.802733 216030 cri.go:89] found id: ""
I1205 00:04:45.802741 216030 logs.go:282] 2 containers: [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c]
I1205 00:04:45.802798 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.806565 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.810357 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1205 00:04:45.810434 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1205 00:04:45.853797 216030 cri.go:89] found id: "4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
I1205 00:04:45.853818 216030 cri.go:89] found id: "05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
I1205 00:04:45.853823 216030 cri.go:89] found id: ""
I1205 00:04:45.853832 216030 logs.go:282] 2 containers: [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0]
I1205 00:04:45.853889 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.857263 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.862164 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1205 00:04:45.862243 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1205 00:04:45.902320 216030 cri.go:89] found id: "355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
I1205 00:04:45.902409 216030 cri.go:89] found id: "f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
I1205 00:04:45.902423 216030 cri.go:89] found id: ""
I1205 00:04:45.902431 216030 logs.go:282] 2 containers: [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88]
I1205 00:04:45.902501 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.906129 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.909489 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1205 00:04:45.909590 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1205 00:04:45.951353 216030 cri.go:89] found id: "0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
I1205 00:04:45.951376 216030 cri.go:89] found id: "cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
I1205 00:04:45.951381 216030 cri.go:89] found id: ""
I1205 00:04:45.951388 216030 logs.go:282] 2 containers: [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15]
I1205 00:04:45.951449 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.955123 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:45.958548 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1205 00:04:45.958621 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1205 00:04:46.013456 216030 cri.go:89] found id: "9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
I1205 00:04:46.013484 216030 cri.go:89] found id: "3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
I1205 00:04:46.013489 216030 cri.go:89] found id: ""
I1205 00:04:46.013497 216030 logs.go:282] 2 containers: [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d]
I1205 00:04:46.013620 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:46.018166 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:46.022058 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1205 00:04:46.022188 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1205 00:04:46.071154 216030 cri.go:89] found id: "eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
I1205 00:04:46.071186 216030 cri.go:89] found id: ""
I1205 00:04:46.071195 216030 logs.go:282] 1 containers: [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e]
I1205 00:04:46.071278 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:46.075279 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1205 00:04:46.075401 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1205 00:04:46.115487 216030 cri.go:89] found id: "61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
I1205 00:04:46.115560 216030 cri.go:89] found id: "cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
I1205 00:04:46.115580 216030 cri.go:89] found id: ""
I1205 00:04:46.115593 216030 logs.go:282] 2 containers: [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf]
I1205 00:04:46.115669 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:46.119363 216030 ssh_runner.go:195] Run: which crictl
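Each component lookup above follows the same pattern: resolve crictl on the node, then list all containers, running or exited, whose name matches, which is why most components return two IDs after the restart (the current container plus the pre-restart one). Rerun by hand inside the node (e.g. via minikube ssh), the pair of steps looks like:
sudo crictl ps -a --quiet --name=kube-apiserver   # one container ID per line, including exited containers
sudo crictl logs --tail 400 d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7   # last 400 log lines of the ID found above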
I1205 00:04:46.122924 216030 logs.go:123] Gathering logs for coredns [9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c] ...
I1205 00:04:46.122956 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
I1205 00:04:46.164473 216030 logs.go:123] Gathering logs for storage-provisioner [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b] ...
I1205 00:04:46.164503 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
I1205 00:04:46.219238 216030 logs.go:123] Gathering logs for describe nodes ...
I1205 00:04:46.219270 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1205 00:04:46.367441 216030 logs.go:123] Gathering logs for kube-proxy [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f] ...
I1205 00:04:46.367470 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
I1205 00:04:46.406779 216030 logs.go:123] Gathering logs for kindnet [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae] ...
I1205 00:04:46.406805 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
I1205 00:04:46.454765 216030 logs.go:123] Gathering logs for kindnet [3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d] ...
I1205 00:04:46.454792 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
I1205 00:04:46.498510 216030 logs.go:123] Gathering logs for kubernetes-dashboard [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e] ...
I1205 00:04:46.498538 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
I1205 00:04:46.537447 216030 logs.go:123] Gathering logs for containerd ...
I1205 00:04:46.537476 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1205 00:04:46.617148 216030 logs.go:123] Gathering logs for etcd [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716] ...
I1205 00:04:46.617196 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
I1205 00:04:46.667834 216030 logs.go:123] Gathering logs for kube-scheduler [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30] ...
I1205 00:04:46.667985 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
I1205 00:04:46.732274 216030 logs.go:123] Gathering logs for kube-scheduler [05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0] ...
I1205 00:04:46.732303 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
I1205 00:04:46.792624 216030 logs.go:123] Gathering logs for kube-proxy [f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88] ...
I1205 00:04:46.792656 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
I1205 00:04:46.830707 216030 logs.go:123] Gathering logs for storage-provisioner [cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf] ...
I1205 00:04:46.830736 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
I1205 00:04:46.875737 216030 logs.go:123] Gathering logs for kube-apiserver [138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8] ...
I1205 00:04:46.875769 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
I1205 00:04:46.960343 216030 logs.go:123] Gathering logs for dmesg ...
I1205 00:04:46.960376 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1205 00:04:46.978879 216030 logs.go:123] Gathering logs for kube-apiserver [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7] ...
I1205 00:04:46.978908 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
I1205 00:04:47.043184 216030 logs.go:123] Gathering logs for etcd [03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e] ...
I1205 00:04:47.043220 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
I1205 00:04:47.095108 216030 logs.go:123] Gathering logs for coredns [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da] ...
I1205 00:04:47.095137 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
I1205 00:04:47.138073 216030 logs.go:123] Gathering logs for kube-controller-manager [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196] ...
I1205 00:04:47.138112 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
I1205 00:04:44.698855 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:46.698935 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:47.200917 216030 logs.go:123] Gathering logs for kube-controller-manager [cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15] ...
I1205 00:04:47.200959 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
I1205 00:04:47.290017 216030 logs.go:123] Gathering logs for container status ...
I1205 00:04:47.290077 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1205 00:04:47.355835 216030 logs.go:123] Gathering logs for kubelet ...
I1205 00:04:47.355861 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
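After pulling the last 400 kubelet journal lines, logs.go scans them for known problem patterns, producing the "Found kubelet problem" warnings below. A rough manual equivalent, assuming the E-prefixed klog error format, is:
sudo journalctl -u kubelet -n 400 --no-pager | grep -E 'E[0-9]{4}'   # surfaces kubelet error lines such as the reflector.go and pod_workers.go entries flagged below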
W1205 00:04:47.415957 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029161 664 reflector.go:138] object-"kube-system"/"kindnet-token-rrxv8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rrxv8" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:47.416229 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029640 664 reflector.go:138] object-"default"/"default-token-6q5g5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6q5g5" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:47.416462 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029889 664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7b2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7b2f" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:47.422607 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:23 old-k8s-version-066167 kubelet[664]: E1204 23:59:23.455493 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:47.422894 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:24 old-k8s-version-066167 kubelet[664]: E1204 23:59:24.408536 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.425873 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:36 old-k8s-version-066167 kubelet[664]: E1204 23:59:36.194820 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:47.427977 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:46 old-k8s-version-066167 kubelet[664]: E1204 23:59:46.769553 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.428166 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.173711 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.428495 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.774292 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.429164 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.679719 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.429606 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.812800 664 pod_workers.go:191] Error syncing pod 81fe575b-ab3c-49a1-b013-84ec8c0bea1c ("storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"
W1205 00:04:47.432365 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:00 old-k8s-version-066167 kubelet[664]: E1205 00:00:00.315343 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:47.432950 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:08 old-k8s-version-066167 kubelet[664]: E1205 00:00:08.854222 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.433315 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:11 old-k8s-version-066167 kubelet[664]: E1205 00:00:11.169368 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.433645 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:15 old-k8s-version-066167 kubelet[664]: E1205 00:00:15.678257 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.433831 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:26 old-k8s-version-066167 kubelet[664]: E1205 00:00:26.169392 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.434156 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:27 old-k8s-version-066167 kubelet[664]: E1205 00:00:27.168949 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.434742 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:39 old-k8s-version-066167 kubelet[664]: E1205 00:00:39.964267 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.437168 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:41 old-k8s-version-066167 kubelet[664]: E1205 00:00:41.177237 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:47.437499 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:45 old-k8s-version-066167 kubelet[664]: E1205 00:00:45.677813 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.437686 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:54 old-k8s-version-066167 kubelet[664]: E1205 00:00:54.170310 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.438017 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:57 old-k8s-version-066167 kubelet[664]: E1205 00:00:57.168714 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.438200 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:07 old-k8s-version-066167 kubelet[664]: E1205 00:01:07.169610 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.438538 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:08 old-k8s-version-066167 kubelet[664]: E1205 00:01:08.169137 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.439120 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.080372 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.439303 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.172882 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.439631 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:25 old-k8s-version-066167 kubelet[664]: E1205 00:01:25.677810 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.439814 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:36 old-k8s-version-066167 kubelet[664]: E1205 00:01:36.169551 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.440143 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:39 old-k8s-version-066167 kubelet[664]: E1205 00:01:39.170525 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.440328 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:50 old-k8s-version-066167 kubelet[664]: E1205 00:01:50.170399 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.440657 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:52 old-k8s-version-066167 kubelet[664]: E1205 00:01:52.168832 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.441030 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:03 old-k8s-version-066167 kubelet[664]: E1205 00:02:03.168796 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.443463 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:04 old-k8s-version-066167 kubelet[664]: E1205 00:02:04.179849 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:47.443782 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.169877 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.443980 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.170577 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.444164 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:27 old-k8s-version-066167 kubelet[664]: E1205 00:02:27.169252 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.444490 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:30 old-k8s-version-066167 kubelet[664]: E1205 00:02:30.169307 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.444674 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:40 old-k8s-version-066167 kubelet[664]: E1205 00:02:40.172577 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.445266 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:43 old-k8s-version-066167 kubelet[664]: E1205 00:02:43.346410 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.445596 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:45 old-k8s-version-066167 kubelet[664]: E1205 00:02:45.677951 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.445781 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:53 old-k8s-version-066167 kubelet[664]: E1205 00:02:53.169354 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.446106 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:58 old-k8s-version-066167 kubelet[664]: E1205 00:02:58.169560 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.446289 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:08 old-k8s-version-066167 kubelet[664]: E1205 00:03:08.172186 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.446622 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:12 old-k8s-version-066167 kubelet[664]: E1205 00:03:12.169424 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.446806 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:21 old-k8s-version-066167 kubelet[664]: E1205 00:03:21.169226 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.447136 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:23 old-k8s-version-066167 kubelet[664]: E1205 00:03:23.168967 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.447319 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:34 old-k8s-version-066167 kubelet[664]: E1205 00:03:34.173087 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.447646 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:36 old-k8s-version-066167 kubelet[664]: E1205 00:03:36.169303 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.447972 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.168901 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.448154 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.170002 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.448468 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169436 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.448666 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169654 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.448992 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: E1205 00:04:14.169276 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.449185 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.449511 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.449719 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.450052 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.450238 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1205 00:04:47.450255 216030 out.go:358] Setting ErrFile to fd 2...
I1205 00:04:47.450266 216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1205 00:04:47.450326 216030 out.go:270] X Problems detected in kubelet:
W1205 00:04:47.450338 216030 out.go:270] Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.450346 216030 out.go:270] Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.450353 216030 out.go:270] Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:47.450362 216030 out.go:270] Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:47.450370 216030 out.go:270] Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I1205 00:04:47.450378 216030 out.go:358] Setting ErrFile to fd 2...
I1205 00:04:47.450384 216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
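The summary block above is minikube repeating the two persistent failures on the old-k8s-version-066167 node: metrics-server never starts because its image points at the unresolvable fake.domain registry (the earlier ErrImagePull lines show "lookup fake.domain ... no such host"), and dashboard-metrics-scraper sits in a growing CrashLoopBackOff. The same evidence can be pulled by hand on the node; a minimal sketch using only commands that already appear in this log (the --name filter value here is an illustrative assumption, any crictl name filter works the same way):

    # list all containers for the failing pod (crictl invocation as used by minikube above)
    sudo crictl ps -a --quiet --name=metrics-server
    # tail the kubelet journal and keep only the back-off lines
    sudo journalctl -u kubelet -n 400 | grep -E 'ImagePullBackOff|CrashLoopBackOff'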
I1205 00:04:49.198557 221677 pod_ready.go:103] pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace has status "Ready":"False"
I1205 00:04:49.698032 221677 pod_ready.go:82] duration metric: took 4m0.00632943s for pod "metrics-server-6867b74b74-kz2tf" in "kube-system" namespace to be "Ready" ...
E1205 00:04:49.698060 221677 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1205 00:04:49.698069 221677 pod_ready.go:39] duration metric: took 4m14.281527329s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1205 00:04:49.698084 221677 api_server.go:52] waiting for apiserver process to appear ...
I1205 00:04:49.698114 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1205 00:04:49.698172 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1205 00:04:49.742404 221677 cri.go:89] found id: "ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad"
I1205 00:04:49.742429 221677 cri.go:89] found id: "e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c"
I1205 00:04:49.742433 221677 cri.go:89] found id: ""
I1205 00:04:49.742441 221677 logs.go:282] 2 containers: [ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c]
I1205 00:04:49.742497 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.746365 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.750155 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1205 00:04:49.750233 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1205 00:04:49.789016 221677 cri.go:89] found id: "7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502"
I1205 00:04:49.789040 221677 cri.go:89] found id: "55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778"
I1205 00:04:49.789046 221677 cri.go:89] found id: ""
I1205 00:04:49.789053 221677 logs.go:282] 2 containers: [7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502 55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778]
I1205 00:04:49.789161 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.792800 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.796296 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1205 00:04:49.796370 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1205 00:04:49.833967 221677 cri.go:89] found id: "340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc"
I1205 00:04:49.833990 221677 cri.go:89] found id: "3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9"
I1205 00:04:49.833996 221677 cri.go:89] found id: ""
I1205 00:04:49.834004 221677 logs.go:282] 2 containers: [340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc 3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9]
I1205 00:04:49.834082 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.837887 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.841454 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1205 00:04:49.841550 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1205 00:04:49.878206 221677 cri.go:89] found id: "253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64"
I1205 00:04:49.878231 221677 cri.go:89] found id: "627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6"
I1205 00:04:49.878235 221677 cri.go:89] found id: ""
I1205 00:04:49.878243 221677 logs.go:282] 2 containers: [253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64 627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6]
I1205 00:04:49.878302 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.882058 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.885685 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1205 00:04:49.885762 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1205 00:04:49.923095 221677 cri.go:89] found id: "80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c"
I1205 00:04:49.923179 221677 cri.go:89] found id: "f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed"
I1205 00:04:49.923199 221677 cri.go:89] found id: ""
I1205 00:04:49.923211 221677 logs.go:282] 2 containers: [80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed]
I1205 00:04:49.923274 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.926709 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.930399 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1205 00:04:49.930497 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1205 00:04:49.973709 221677 cri.go:89] found id: "13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94"
I1205 00:04:49.973774 221677 cri.go:89] found id: "8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654"
I1205 00:04:49.973795 221677 cri.go:89] found id: ""
I1205 00:04:49.973820 221677 logs.go:282] 2 containers: [13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94 8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654]
I1205 00:04:49.973896 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.977814 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:49.981292 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1205 00:04:49.981394 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1205 00:04:50.029660 221677 cri.go:89] found id: "0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3"
I1205 00:04:50.029727 221677 cri.go:89] found id: "fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be"
I1205 00:04:50.029745 221677 cri.go:89] found id: ""
I1205 00:04:50.029760 221677 logs.go:282] 2 containers: [0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3 fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be]
I1205 00:04:50.029823 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:50.034042 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:50.038266 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1205 00:04:50.038363 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1205 00:04:50.079366 221677 cri.go:89] found id: "317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393"
I1205 00:04:50.079398 221677 cri.go:89] found id: ""
I1205 00:04:50.079406 221677 logs.go:282] 1 containers: [317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393]
I1205 00:04:50.079464 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:50.083616 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1205 00:04:50.083723 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1205 00:04:50.123759 221677 cri.go:89] found id: "473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8"
I1205 00:04:50.123787 221677 cri.go:89] found id: "73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e"
I1205 00:04:50.123793 221677 cri.go:89] found id: ""
I1205 00:04:50.123800 221677 logs.go:282] 2 containers: [473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8 73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e]
I1205 00:04:50.123858 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:50.127613 221677 ssh_runner.go:195] Run: which crictl
I1205 00:04:50.132424 221677 logs.go:123] Gathering logs for coredns [3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9] ...
I1205 00:04:50.132452 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9"
I1205 00:04:50.177377 221677 logs.go:123] Gathering logs for kindnet [0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3] ...
I1205 00:04:50.177411 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3"
I1205 00:04:50.221418 221677 logs.go:123] Gathering logs for kubernetes-dashboard [317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393] ...
I1205 00:04:50.221450 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393"
I1205 00:04:50.272318 221677 logs.go:123] Gathering logs for describe nodes ...
I1205 00:04:50.272349 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1205 00:04:50.429061 221677 logs.go:123] Gathering logs for etcd [7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502] ...
I1205 00:04:50.429091 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502"
I1205 00:04:50.479805 221677 logs.go:123] Gathering logs for etcd [55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778] ...
I1205 00:04:50.479835 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778"
I1205 00:04:50.541743 221677 logs.go:123] Gathering logs for kubelet ...
I1205 00:04:50.541781 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1205 00:04:50.592967 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.479571 658 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
W1205 00:04:50.593244 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.479778 658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
W1205 00:04:50.593431 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480370 658 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
W1205 00:04:50.593674 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480506 658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
W1205 00:04:50.593861 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480651 658 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'no-preload-013030' and this object
W1205 00:04:50.594094 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480748 658 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
W1205 00:04:50.594285 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.488143 658 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
W1205 00:04:50.594508 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.488360 658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
I1205 00:04:50.644820 221677 logs.go:123] Gathering logs for kube-apiserver [e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c] ...
I1205 00:04:50.644860 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c"
I1205 00:04:50.713283 221677 logs.go:123] Gathering logs for kube-scheduler [253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64] ...
I1205 00:04:50.713321 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64"
I1205 00:04:50.773677 221677 logs.go:123] Gathering logs for kube-scheduler [627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6] ...
I1205 00:04:50.773727 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6"
I1205 00:04:50.850795 221677 logs.go:123] Gathering logs for kube-controller-manager [8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654] ...
I1205 00:04:50.850871 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654"
I1205 00:04:50.929818 221677 logs.go:123] Gathering logs for storage-provisioner [473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8] ...
I1205 00:04:50.929890 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8"
I1205 00:04:50.974600 221677 logs.go:123] Gathering logs for storage-provisioner [73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e] ...
I1205 00:04:50.974633 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e"
I1205 00:04:51.032274 221677 logs.go:123] Gathering logs for containerd ...
I1205 00:04:51.032302 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1205 00:04:51.101703 221677 logs.go:123] Gathering logs for dmesg ...
I1205 00:04:51.101797 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1205 00:04:51.120293 221677 logs.go:123] Gathering logs for kube-apiserver [ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad] ...
I1205 00:04:51.120325 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad"
I1205 00:04:51.191956 221677 logs.go:123] Gathering logs for coredns [340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc] ...
I1205 00:04:51.192048 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc"
I1205 00:04:51.249282 221677 logs.go:123] Gathering logs for kube-proxy [80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c] ...
I1205 00:04:51.249313 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c"
I1205 00:04:51.297608 221677 logs.go:123] Gathering logs for kube-proxy [f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed] ...
I1205 00:04:51.297638 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed"
I1205 00:04:51.340440 221677 logs.go:123] Gathering logs for kube-controller-manager [13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94] ...
I1205 00:04:51.340467 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94"
I1205 00:04:51.412346 221677 logs.go:123] Gathering logs for kindnet [fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be] ...
I1205 00:04:51.412383 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be"
I1205 00:04:51.457251 221677 logs.go:123] Gathering logs for container status ...
I1205 00:04:51.457284 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1205 00:04:51.506219 221677 out.go:358] Setting ErrFile to fd 2...
I1205 00:04:51.506242 221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1205 00:04:51.506295 221677 out.go:270] X Problems detected in kubelet:
W1205 00:04:51.506307 221677 out.go:270] Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480506 658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
W1205 00:04:51.506315 221677 out.go:270] Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480651 658 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'no-preload-013030' and this object
W1205 00:04:51.506322 221677 out.go:270] Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480748 658 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
W1205 00:04:51.506328 221677 out.go:270] Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.488143 658 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
W1205 00:04:51.506334 221677 out.go:270] Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.488360 658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
I1205 00:04:51.506344 221677 out.go:358] Setting ErrFile to fd 2...
I1205 00:04:51.506350 221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
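The log gathering for the no-preload-013030 profile above follows a fixed pattern: for each control-plane component, resolve crictl with `which crictl`, list matching container IDs, then tail the last 400 log lines of each container. A sketch of the equivalent manual loop, assembled only from the commands that appear verbatim in this log:

    # component list and flags taken from the crictl calls above
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard storage-provisioner; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        sudo /usr/bin/crictl logs --tail 400 "$id"
      done
    done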
I1205 00:04:57.451637 216030 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 00:04:57.463556 216030 api_server.go:72] duration metric: took 5m57.25534682s to wait for apiserver process to appear ...
I1205 00:04:57.463582 216030 api_server.go:88] waiting for apiserver healthz status ...
I1205 00:04:57.463617 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1205 00:04:57.463679 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1205 00:04:57.502613 216030 cri.go:89] found id: "d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
I1205 00:04:57.502634 216030 cri.go:89] found id: "138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
I1205 00:04:57.502639 216030 cri.go:89] found id: ""
I1205 00:04:57.502646 216030 logs.go:282] 2 containers: [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8]
I1205 00:04:57.502706 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.506578 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.510329 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1205 00:04:57.510403 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1205 00:04:57.549412 216030 cri.go:89] found id: "d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
I1205 00:04:57.549434 216030 cri.go:89] found id: "03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
I1205 00:04:57.549439 216030 cri.go:89] found id: ""
I1205 00:04:57.549446 216030 logs.go:282] 2 containers: [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e]
I1205 00:04:57.549522 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.553176 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.556561 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1205 00:04:57.556630 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1205 00:04:57.606322 216030 cri.go:89] found id: "18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
I1205 00:04:57.606344 216030 cri.go:89] found id: "9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
I1205 00:04:57.606349 216030 cri.go:89] found id: ""
I1205 00:04:57.606356 216030 logs.go:282] 2 containers: [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c]
I1205 00:04:57.606414 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.610546 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.614234 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1205 00:04:57.614302 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1205 00:04:57.657522 216030 cri.go:89] found id: "4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
I1205 00:04:57.657543 216030 cri.go:89] found id: "05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
I1205 00:04:57.657549 216030 cri.go:89] found id: ""
I1205 00:04:57.657556 216030 logs.go:282] 2 containers: [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0]
I1205 00:04:57.657619 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.661379 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.664752 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1205 00:04:57.664830 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1205 00:04:57.712770 216030 cri.go:89] found id: "355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
I1205 00:04:57.712861 216030 cri.go:89] found id: "f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
I1205 00:04:57.712880 216030 cri.go:89] found id: ""
I1205 00:04:57.712898 216030 logs.go:282] 2 containers: [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88]
I1205 00:04:57.712996 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.717580 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.721738 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1205 00:04:57.721819 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1205 00:04:57.759280 216030 cri.go:89] found id: "0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
I1205 00:04:57.759302 216030 cri.go:89] found id: "cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
I1205 00:04:57.759307 216030 cri.go:89] found id: ""
I1205 00:04:57.759314 216030 logs.go:282] 2 containers: [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15]
I1205 00:04:57.759371 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.763240 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.766739 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1205 00:04:57.766823 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1205 00:04:57.804341 216030 cri.go:89] found id: "9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
I1205 00:04:57.804366 216030 cri.go:89] found id: "3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
I1205 00:04:57.804372 216030 cri.go:89] found id: ""
I1205 00:04:57.804379 216030 logs.go:282] 2 containers: [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d]
I1205 00:04:57.804439 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.808307 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.811971 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1205 00:04:57.812044 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1205 00:04:57.865535 216030 cri.go:89] found id: "61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
I1205 00:04:57.865556 216030 cri.go:89] found id: "cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
I1205 00:04:57.865561 216030 cri.go:89] found id: ""
I1205 00:04:57.865568 216030 logs.go:282] 2 containers: [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf]
I1205 00:04:57.865627 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.869504 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.872895 216030 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1205 00:04:57.873022 216030 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1205 00:04:57.913425 216030 cri.go:89] found id: "eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
I1205 00:04:57.913449 216030 cri.go:89] found id: ""
I1205 00:04:57.913463 216030 logs.go:282] 1 containers: [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e]
I1205 00:04:57.913526 216030 ssh_runner.go:195] Run: which crictl
I1205 00:04:57.917503 216030 logs.go:123] Gathering logs for kube-proxy [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f] ...
I1205 00:04:57.917529 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f"
I1205 00:04:57.959718 216030 logs.go:123] Gathering logs for kube-controller-manager [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196] ...
I1205 00:04:57.959742 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196"
I1205 00:04:58.030401 216030 logs.go:123] Gathering logs for storage-provisioner [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b] ...
I1205 00:04:58.030436 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b"
I1205 00:04:58.089905 216030 logs.go:123] Gathering logs for storage-provisioner [cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf] ...
I1205 00:04:58.089933 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf"
I1205 00:04:58.129773 216030 logs.go:123] Gathering logs for etcd [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716] ...
I1205 00:04:58.129861 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716"
I1205 00:04:58.170834 216030 logs.go:123] Gathering logs for coredns [9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c] ...
I1205 00:04:58.170863 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c"
I1205 00:04:58.217420 216030 logs.go:123] Gathering logs for kube-scheduler [05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0] ...
I1205 00:04:58.217449 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0"
I1205 00:04:58.264707 216030 logs.go:123] Gathering logs for kube-proxy [f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88] ...
I1205 00:04:58.264735 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88"
I1205 00:04:58.314661 216030 logs.go:123] Gathering logs for kube-controller-manager [cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15] ...
I1205 00:04:58.314686 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15"
I1205 00:04:58.372507 216030 logs.go:123] Gathering logs for kindnet [3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d] ...
I1205 00:04:58.372541 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d"
I1205 00:04:58.414881 216030 logs.go:123] Gathering logs for kubelet ...
I1205 00:04:58.414910 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1205 00:04:58.477133 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029161 664 reflector.go:138] object-"kube-system"/"kindnet-token-rrxv8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rrxv8" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:58.477409 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029640 664 reflector.go:138] object-"default"/"default-token-6q5g5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-6q5g5" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:58.477645 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:22 old-k8s-version-066167 kubelet[664]: E1204 23:59:22.029889 664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-f7b2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-f7b2f" is forbidden: User "system:node:old-k8s-version-066167" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-066167' and this object
W1205 00:04:58.483742 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:23 old-k8s-version-066167 kubelet[664]: E1204 23:59:23.455493 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:58.484031 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:24 old-k8s-version-066167 kubelet[664]: E1204 23:59:24.408536 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.486976 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:36 old-k8s-version-066167 kubelet[664]: E1204 23:59:36.194820 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:58.489032 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:46 old-k8s-version-066167 kubelet[664]: E1204 23:59:46.769553 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.489223 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.173711 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.489549 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:47 old-k8s-version-066167 kubelet[664]: E1204 23:59:47.774292 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.490248 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.679719 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.490681 216030 logs.go:138] Found kubelet problem: Dec 04 23:59:55 old-k8s-version-066167 kubelet[664]: E1204 23:59:55.812800 664 pod_workers.go:191] Error syncing pod 81fe575b-ab3c-49a1-b013-84ec8c0bea1c ("storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(81fe575b-ab3c-49a1-b013-84ec8c0bea1c)"
W1205 00:04:58.493488 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:00 old-k8s-version-066167 kubelet[664]: E1205 00:00:00.315343 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:58.494069 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:08 old-k8s-version-066167 kubelet[664]: E1205 00:00:08.854222 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.494382 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:11 old-k8s-version-066167 kubelet[664]: E1205 00:00:11.169368 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.494707 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:15 old-k8s-version-066167 kubelet[664]: E1205 00:00:15.678257 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.494888 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:26 old-k8s-version-066167 kubelet[664]: E1205 00:00:26.169392 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.495213 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:27 old-k8s-version-066167 kubelet[664]: E1205 00:00:27.168949 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.495824 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:39 old-k8s-version-066167 kubelet[664]: E1205 00:00:39.964267 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.498358 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:41 old-k8s-version-066167 kubelet[664]: E1205 00:00:41.177237 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:58.498692 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:45 old-k8s-version-066167 kubelet[664]: E1205 00:00:45.677813 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.498876 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:54 old-k8s-version-066167 kubelet[664]: E1205 00:00:54.170310 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.499205 216030 logs.go:138] Found kubelet problem: Dec 05 00:00:57 old-k8s-version-066167 kubelet[664]: E1205 00:00:57.168714 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.499388 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:07 old-k8s-version-066167 kubelet[664]: E1205 00:01:07.169610 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.499711 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:08 old-k8s-version-066167 kubelet[664]: E1205 00:01:08.169137 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.500296 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.080372 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.500479 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:22 old-k8s-version-066167 kubelet[664]: E1205 00:01:22.172882 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.500805 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:25 old-k8s-version-066167 kubelet[664]: E1205 00:01:25.677810 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.500988 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:36 old-k8s-version-066167 kubelet[664]: E1205 00:01:36.169551 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.501317 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:39 old-k8s-version-066167 kubelet[664]: E1205 00:01:39.170525 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.501505 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:50 old-k8s-version-066167 kubelet[664]: E1205 00:01:50.170399 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.501834 216030 logs.go:138] Found kubelet problem: Dec 05 00:01:52 old-k8s-version-066167 kubelet[664]: E1205 00:01:52.168832 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.502157 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:03 old-k8s-version-066167 kubelet[664]: E1205 00:02:03.168796 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.504721 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:04 old-k8s-version-066167 kubelet[664]: E1205 00:02:04.179849 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1205 00:04:58.505045 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.169877 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.505250 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:15 old-k8s-version-066167 kubelet[664]: E1205 00:02:15.170577 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.505435 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:27 old-k8s-version-066167 kubelet[664]: E1205 00:02:27.169252 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.505771 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:30 old-k8s-version-066167 kubelet[664]: E1205 00:02:30.169307 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.505960 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:40 old-k8s-version-066167 kubelet[664]: E1205 00:02:40.172577 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.506540 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:43 old-k8s-version-066167 kubelet[664]: E1205 00:02:43.346410 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.506865 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:45 old-k8s-version-066167 kubelet[664]: E1205 00:02:45.677951 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.507048 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:53 old-k8s-version-066167 kubelet[664]: E1205 00:02:53.169354 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.507371 216030 logs.go:138] Found kubelet problem: Dec 05 00:02:58 old-k8s-version-066167 kubelet[664]: E1205 00:02:58.169560 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.507552 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:08 old-k8s-version-066167 kubelet[664]: E1205 00:03:08.172186 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.507880 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:12 old-k8s-version-066167 kubelet[664]: E1205 00:03:12.169424 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.508061 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:21 old-k8s-version-066167 kubelet[664]: E1205 00:03:21.169226 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.508388 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:23 old-k8s-version-066167 kubelet[664]: E1205 00:03:23.168967 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.508586 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:34 old-k8s-version-066167 kubelet[664]: E1205 00:03:34.173087 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.508910 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:36 old-k8s-version-066167 kubelet[664]: E1205 00:03:36.169303 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.509240 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.168901 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.509423 216030 logs.go:138] Found kubelet problem: Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.170002 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.509780 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169436 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.509978 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169654 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.510307 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: E1205 00:04:14.169276 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.510488 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.510817 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.511000 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.511326 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.511509 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:58.511836 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: E1205 00:04:51.168842 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:58.514252 216030 logs.go:138] Found kubelet problem: Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181049 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
I1205 00:04:58.514266 216030 logs.go:123] Gathering logs for dmesg ...
I1205 00:04:58.514280 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1205 00:04:58.534466 216030 logs.go:123] Gathering logs for describe nodes ...
I1205 00:04:58.534492 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1205 00:04:58.682096 216030 logs.go:123] Gathering logs for coredns [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da] ...
I1205 00:04:58.682123 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da"
I1205 00:04:58.725405 216030 logs.go:123] Gathering logs for kubernetes-dashboard [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e] ...
I1205 00:04:58.725431 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e"
I1205 00:04:58.773720 216030 logs.go:123] Gathering logs for containerd ...
I1205 00:04:58.773748 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1205 00:04:58.836186 216030 logs.go:123] Gathering logs for kindnet [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae] ...
I1205 00:04:58.836222 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae"
I1205 00:04:58.899828 216030 logs.go:123] Gathering logs for container status ...
I1205 00:04:58.899854 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1205 00:04:58.942941 216030 logs.go:123] Gathering logs for kube-apiserver [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7] ...
I1205 00:04:58.942971 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7"
I1205 00:04:59.018047 216030 logs.go:123] Gathering logs for kube-apiserver [138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8] ...
I1205 00:04:59.018114 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8"
I1205 00:04:59.103130 216030 logs.go:123] Gathering logs for etcd [03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e] ...
I1205 00:04:59.103163 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e"
I1205 00:04:59.151511 216030 logs.go:123] Gathering logs for kube-scheduler [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30] ...
I1205 00:04:59.151539 216030 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30"
I1205 00:04:59.192352 216030 out.go:358] Setting ErrFile to fd 2...
I1205 00:04:59.192377 216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1205 00:04:59.192481 216030 out.go:270] X Problems detected in kubelet:
W1205 00:04:59.192494 216030 out.go:270] Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:59.192519 216030 out.go:270] Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:59.192537 216030 out.go:270] Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1205 00:04:59.192564 216030 out.go:270] Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: E1205 00:04:51.168842 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
W1205 00:04:59.192574 216030 out.go:270] Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181049 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
I1205 00:04:59.192585 216030 out.go:358] Setting ErrFile to fd 2...
I1205 00:04:59.192591 216030 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 00:05:01.508204 221677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1205 00:05:01.522927 221677 api_server.go:72] duration metric: took 4m31.747740213s to wait for apiserver process to appear ...
I1205 00:05:01.522953 221677 api_server.go:88] waiting for apiserver healthz status ...
I1205 00:05:01.522997 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1205 00:05:01.523070 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1205 00:05:01.570928 221677 cri.go:89] found id: "ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad"
I1205 00:05:01.570955 221677 cri.go:89] found id: "e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c"
I1205 00:05:01.570961 221677 cri.go:89] found id: ""
I1205 00:05:01.570969 221677 logs.go:282] 2 containers: [ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c]
I1205 00:05:01.571031 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.575102 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.579218 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1205 00:05:01.579387 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1205 00:05:01.630856 221677 cri.go:89] found id: "7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502"
I1205 00:05:01.630879 221677 cri.go:89] found id: "55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778"
I1205 00:05:01.630884 221677 cri.go:89] found id: ""
I1205 00:05:01.630892 221677 logs.go:282] 2 containers: [7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502 55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778]
I1205 00:05:01.630954 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.635207 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.639199 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1205 00:05:01.639278 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1205 00:05:01.685416 221677 cri.go:89] found id: "340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc"
I1205 00:05:01.685444 221677 cri.go:89] found id: "3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9"
I1205 00:05:01.685449 221677 cri.go:89] found id: ""
I1205 00:05:01.685460 221677 logs.go:282] 2 containers: [340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc 3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9]
I1205 00:05:01.685573 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.690213 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.694387 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1205 00:05:01.694473 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1205 00:05:01.743952 221677 cri.go:89] found id: "253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64"
I1205 00:05:01.743979 221677 cri.go:89] found id: "627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6"
I1205 00:05:01.743984 221677 cri.go:89] found id: ""
I1205 00:05:01.743993 221677 logs.go:282] 2 containers: [253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64 627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6]
I1205 00:05:01.744058 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.748795 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.753296 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1205 00:05:01.753409 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1205 00:05:01.803333 221677 cri.go:89] found id: "80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c"
I1205 00:05:01.803359 221677 cri.go:89] found id: "f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed"
I1205 00:05:01.803366 221677 cri.go:89] found id: ""
I1205 00:05:01.803375 221677 logs.go:282] 2 containers: [80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed]
I1205 00:05:01.803474 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.808123 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.812320 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1205 00:05:01.812434 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1205 00:05:01.869520 221677 cri.go:89] found id: "13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94"
I1205 00:05:01.869542 221677 cri.go:89] found id: "8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654"
I1205 00:05:01.869547 221677 cri.go:89] found id: ""
I1205 00:05:01.869555 221677 logs.go:282] 2 containers: [13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94 8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654]
I1205 00:05:01.869655 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.874792 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.878997 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1205 00:05:01.879103 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1205 00:05:01.919002 221677 cri.go:89] found id: "0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3"
I1205 00:05:01.919029 221677 cri.go:89] found id: "fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be"
I1205 00:05:01.919035 221677 cri.go:89] found id: ""
I1205 00:05:01.919042 221677 logs.go:282] 2 containers: [0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3 fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be]
I1205 00:05:01.919198 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.924478 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.928689 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1205 00:05:01.928872 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1205 00:05:01.970375 221677 cri.go:89] found id: "317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393"
I1205 00:05:01.970443 221677 cri.go:89] found id: ""
I1205 00:05:01.970464 221677 logs.go:282] 1 containers: [317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393]
I1205 00:05:01.970549 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:01.974739 221677 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1205 00:05:01.974813 221677 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1205 00:05:02.022289 221677 cri.go:89] found id: "473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8"
I1205 00:05:02.022313 221677 cri.go:89] found id: "73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e"
I1205 00:05:02.022318 221677 cri.go:89] found id: ""
I1205 00:05:02.022327 221677 logs.go:282] 2 containers: [473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8 73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e]
I1205 00:05:02.022395 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:02.027944 221677 ssh_runner.go:195] Run: which crictl
I1205 00:05:02.032280 221677 logs.go:123] Gathering logs for kube-controller-manager [13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94] ...
I1205 00:05:02.032356 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 13dbb5b540ec61c07776fd8c10d5012b551ecbdba100e87b95658cbf143f7c94"
I1205 00:05:02.099044 221677 logs.go:123] Gathering logs for kindnet [0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3] ...
I1205 00:05:02.099083 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d3b3f831e649b680b52970070cb75e341fe1f99c1330298288f172ba4530ec3"
I1205 00:05:02.150515 221677 logs.go:123] Gathering logs for kubelet ...
I1205 00:05:02.150562 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1205 00:05:02.196355 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.479571 658 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
W1205 00:05:02.196636 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.479778 658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
W1205 00:05:02.196816 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480370 658 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
W1205 00:05:02.197036 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480506 658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
W1205 00:05:02.197232 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480651 658 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'no-preload-013030' and this object
W1205 00:05:02.197465 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480748 658 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
W1205 00:05:02.197645 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.488143 658 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
W1205 00:05:02.197869 221677 logs.go:138] Found kubelet problem: Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.488360 658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
I1205 00:05:02.247348 221677 logs.go:123] Gathering logs for describe nodes ...
I1205 00:05:02.247398 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1205 00:05:02.401726 221677 logs.go:123] Gathering logs for kube-apiserver [e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c] ...
I1205 00:05:02.401760 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e59d31a209893f4ad2e1d4300b6465ae3451fa52714313850f76b71565ad4b4c"
I1205 00:05:02.471391 221677 logs.go:123] Gathering logs for etcd [7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502] ...
I1205 00:05:02.471421 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ce4eb943e35375eb4b894285fcd3e9c17575c593ed66cfa9ed2227d9d64b502"
I1205 00:05:02.526319 221677 logs.go:123] Gathering logs for coredns [3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9] ...
I1205 00:05:02.526353 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cec0c6a64bdf49c4348605f720b8b2b821dd9d692676a422f95e120ddf99ee9"
I1205 00:05:02.566270 221677 logs.go:123] Gathering logs for kube-scheduler [253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64] ...
I1205 00:05:02.566299 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 253d577421b81d1044c0cd40131b78cecd7ec82df0aa7858a28f50430cdadc64"
I1205 00:05:02.610704 221677 logs.go:123] Gathering logs for kube-proxy [f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed] ...
I1205 00:05:02.610788 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7c05e8f9b24d8c4795808e387a530338a63130683228e3c874d05b6593f65ed"
I1205 00:05:02.656045 221677 logs.go:123] Gathering logs for storage-provisioner [73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e] ...
I1205 00:05:02.656073 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 73ed4723997673d6fdec985ae83f035654502bf19a27513a5a26363fe3a08a3e"
I1205 00:05:02.697643 221677 logs.go:123] Gathering logs for containerd ...
I1205 00:05:02.697670 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1205 00:05:02.763297 221677 logs.go:123] Gathering logs for container status ...
I1205 00:05:02.763334 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1205 00:05:02.805439 221677 logs.go:123] Gathering logs for kube-apiserver [ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad] ...
I1205 00:05:02.805474 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecf2526b4e27411f33a8d8874de771b6a828dd338dc5f25c46fafff8e69a4aad"
I1205 00:05:02.857992 221677 logs.go:123] Gathering logs for coredns [340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc] ...
I1205 00:05:02.858025 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 340ce17b4ab3f81277b68f2296d23472b546ad4932a7378d690e77a28b8ca2fc"
I1205 00:05:02.898798 221677 logs.go:123] Gathering logs for kube-scheduler [627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6] ...
I1205 00:05:02.898833 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 627743b6ff45c925d6a310a5cb37c83c9de733144a7ce3f9f6f2c51cf4ecc1b6"
I1205 00:05:02.958129 221677 logs.go:123] Gathering logs for kube-proxy [80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c] ...
I1205 00:05:02.958159 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80806f46db52c047578809450ca58fa5d68465e3e7bde747d8bc2ac90f853b5c"
I1205 00:05:03.007046 221677 logs.go:123] Gathering logs for kube-controller-manager [8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654] ...
I1205 00:05:03.007080 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8157194dd364a2edb8ef0c993eeae9a976b861e850df26fa7f871ad7220eb654"
I1205 00:05:03.091201 221677 logs.go:123] Gathering logs for kindnet [fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be] ...
I1205 00:05:03.091285 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb3000597fb3b485734ba2c003b7adc69ca8a7407665d85928b2b1d6f35f30be"
I1205 00:05:03.133446 221677 logs.go:123] Gathering logs for storage-provisioner [473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8] ...
I1205 00:05:03.133475 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 473f35e6815bd58956da775f2a5d16c191b82ced05ebabc19a094cc58a3ca2c8"
I1205 00:05:03.171656 221677 logs.go:123] Gathering logs for dmesg ...
I1205 00:05:03.171685 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1205 00:05:03.188416 221677 logs.go:123] Gathering logs for etcd [55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778] ...
I1205 00:05:03.188446 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55937f828e9685c3c9eb02bc74ae0f38042cb819a0ec1e6ce23edd0df1e81778"
I1205 00:05:03.240042 221677 logs.go:123] Gathering logs for kubernetes-dashboard [317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393] ...
I1205 00:05:03.240071 221677 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 317b6a3b70b882ab37df36359e083cdb3dc3ec422bf760c40e82f879fc21b393"
I1205 00:05:03.302489 221677 out.go:358] Setting ErrFile to fd 2...
I1205 00:05:03.302514 221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1205 00:05:03.302593 221677 out.go:270] X Problems detected in kubelet:
W1205 00:05:03.302608 221677 out.go:270] Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480506 658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
W1205 00:05:03.302619 221677 out.go:270] Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.480651 658 reflector.go:561] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node 'no-preload-013030' and this object
W1205 00:05:03.302645 221677 out.go:270] Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.480748 658 reflector.go:158] "Unhandled Error" err="object-\"default\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"default\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
W1205 00:05:03.302653 221677 out.go:270] Dec 05 00:00:35 no-preload-013030 kubelet[658]: W1205 00:00:35.488143 658 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:no-preload-013030" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'no-preload-013030' and this object
W1205 00:05:03.302659 221677 out.go:270] Dec 05 00:00:35 no-preload-013030 kubelet[658]: E1205 00:00:35.488360 658 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:no-preload-013030\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-013030' and this object" logger="UnhandledError"
I1205 00:05:03.302664 221677 out.go:358] Setting ErrFile to fd 2...
I1205 00:05:03.302670 221677 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 00:05:09.194046 216030 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1205 00:05:09.205203 216030 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1205 00:05:09.208473 216030 out.go:201]
W1205 00:05:09.210824 216030 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1205 00:05:09.210861 216030 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1205 00:05:09.210876 216030 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1205 00:05:09.210882 216030 out.go:270] *
W1205 00:05:09.211748 216030 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1205 00:05:09.215055 216030 out.go:201]
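For reference, the two recovery steps that minikube itself suggests above can be run directly against this profile (a sketch only; the profile name and flags are taken verbatim from this log):

    minikube logs --file=logs.txt -p old-k8s-version-066167   # capture the full log bundle before wiping state
    minikube delete --all --purge                             # then remove all profiles and cached state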
==> container status <==
CONTAINER      IMAGE          CREATED        STATE    NAME                       ATTEMPT  POD ID         POD
e35397ca54a10  523cad1a4df73  2 minutes ago  Exited   dashboard-metrics-scraper  5        88a4f6f081a06  dashboard-metrics-scraper-8d5bb5db8-z9qx4
61ffbe5238187  ba04bb24b9575  5 minutes ago  Running  storage-provisioner        2        7890e490c1e7d  storage-provisioner
eadd97cb808fe  20b332c9a70d8  5 minutes ago  Running  kubernetes-dashboard       0        9f6eaac45e6b1  kubernetes-dashboard-cd95d586-lgvv5
2be3a6e2ebc5b  1611cd07b61d5  5 minutes ago  Running  busybox                    1        7f7f7151c29d7  busybox
9c39da019cfbd  55b97e1cbb2a3  5 minutes ago  Running  kindnet-cni                1        af5717e323083  kindnet-k6vqq
cf535a7a2872e  ba04bb24b9575  5 minutes ago  Exited   storage-provisioner        1        7890e490c1e7d  storage-provisioner
355e63aab5c7c  25a5233254979  5 minutes ago  Running  kube-proxy                 1        dc4078ab288f4  kube-proxy-xh97b
18e042e221094  db91994f4ee8f  5 minutes ago  Running  coredns                    1        4c9f69be9184e  coredns-74ff55c5b-vb8kf
4576860463a38  e7605f88f17d6  6 minutes ago  Running  kube-scheduler             1        8b0356ea09a06  kube-scheduler-old-k8s-version-066167
d730ecbd86d8e  2c08bbbc02d3a  6 minutes ago  Running  kube-apiserver             1        71daf0b305a43  kube-apiserver-old-k8s-version-066167
d9b089970902b  05b738aa1bc63  6 minutes ago  Running  etcd                       1        f66b8154afe5d  etcd-old-k8s-version-066167
0c57ea5d02a99  1df8a2b116bd1  6 minutes ago  Running  kube-controller-manager    1        8da4499d8fc97  kube-controller-manager-old-k8s-version-066167
77680a8421a1a  1611cd07b61d5  6 minutes ago  Exited   busybox                    0        ff60d0688efb6  busybox
9e6a318a81516  db91994f4ee8f  7 minutes ago  Exited   coredns                    0        fe88c9a9aa083  coredns-74ff55c5b-vb8kf
3bd39d78282b9  55b97e1cbb2a3  7 minutes ago  Exited   kindnet-cni                0        0242dfcaa2f5b  kindnet-k6vqq
f9c9b2c0e523b  25a5233254979  7 minutes ago  Exited   kube-proxy                 0        fd396678ab46a  kube-proxy-xh97b
cc6be8b93da47  1df8a2b116bd1  8 minutes ago  Exited   kube-controller-manager    0        1884db4f94ffc  kube-controller-manager-old-k8s-version-066167
05ccefe05793d  e7605f88f17d6  8 minutes ago  Exited   kube-scheduler             0        9fa098d318588  kube-scheduler-old-k8s-version-066167
138be331ccdd2  2c08bbbc02d3a  8 minutes ago  Exited   kube-apiserver             0        6198991f7b4d9  kube-apiserver-old-k8s-version-066167
03a793869f775  05b738aa1bc63  8 minutes ago  Exited   etcd                       0        78e47ed3d4723  etcd-old-k8s-version-066167
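The container-status table above is CRI-level state snapshotted by minikube logs; while the node container is still up it can be regenerated at any time (a sketch):

    minikube ssh -p old-k8s-version-066167 -- sudo crictl ps -a   # list running and exited containers on the node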
==> containerd <==
Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.189207482Z" level=info msg="CreateContainer within sandbox \"88a4f6f081a06eefb4d38ffff9384604cfd6ef36de26217ec3d6b89ee3c04d91\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3\""
Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.191390761Z" level=info msg="StartContainer for \"e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3\""
Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.282523237Z" level=info msg="StartContainer for \"e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3\" returns successfully"
Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.309489498Z" level=info msg="shim disconnected" id=e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3 namespace=k8s.io
Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.309556228Z" level=warning msg="cleaning up after shim disconnected" id=e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3 namespace=k8s.io
Dec 05 00:01:21 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:21.309568339Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 05 00:01:22 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:22.085148265Z" level=info msg="RemoveContainer for \"28d12bf2326887bbf15f7098954f0db9d334df920bfcaa91b02887c4a7151cfa\""
Dec 05 00:01:22 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:01:22.091842859Z" level=info msg="RemoveContainer for \"28d12bf2326887bbf15f7098954f0db9d334df920bfcaa91b02887c4a7151cfa\" returns successfully"
Dec 05 00:02:04 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:04.169715770Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 05 00:02:04 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:04.177472631Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Dec 05 00:02:04 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:04.179385389Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Dec 05 00:02:04 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:04.179475807Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.178864306Z" level=info msg="CreateContainer within sandbox \"88a4f6f081a06eefb4d38ffff9384604cfd6ef36de26217ec3d6b89ee3c04d91\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.213737565Z" level=info msg="CreateContainer within sandbox \"88a4f6f081a06eefb4d38ffff9384604cfd6ef36de26217ec3d6b89ee3c04d91\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0\""
Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.214663140Z" level=info msg="StartContainer for \"e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0\""
Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.321629198Z" level=info msg="StartContainer for \"e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0\" returns successfully"
Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.367523756Z" level=info msg="shim disconnected" id=e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0 namespace=k8s.io
Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.367789980Z" level=warning msg="cleaning up after shim disconnected" id=e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0 namespace=k8s.io
Dec 05 00:02:42 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:42.367893107Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 05 00:02:43 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:43.351059251Z" level=info msg="RemoveContainer for \"e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3\""
Dec 05 00:02:43 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:02:43.358671041Z" level=info msg="RemoveContainer for \"e48b97d70003bac3f23d7a9f0df66f9ce362277b33c7b95c459922362d4db5c3\" returns successfully"
Dec 05 00:04:53 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:04:53.169453417Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 05 00:04:53 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:04:53.178302311Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Dec 05 00:04:53 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:04:53.180061990Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Dec 05 00:04:53 old-k8s-version-066167 containerd[567]: time="2024-12-05T00:04:53.180155386Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
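The repeated PullImage failures above are plain DNS failures for the fake.domain registry that this test points the metrics-server image at; the lookup error can be reproduced from inside the node (a sketch, assuming the node container is still running):

    minikube ssh -p old-k8s-version-066167 -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4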
==> coredns [18e042e221094225deaf8540656402b9d6fcc1c13a1da51de90c09be1d3171da] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:57304 - 21009 "HINFO IN 2296865237645504130.6773721313697179276. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028776707s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I1204 23:59:53.678335 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-04 23:59:23.677696418 +0000 UTC m=+0.095812585) (total time: 30.000512524s):
Trace[2019727887]: [30.000512524s] [30.000512524s] END
E1204 23:59:53.678381 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I1204 23:59:53.678836 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-04 23:59:23.678432987 +0000 UTC m=+0.096549187) (total time: 30.000384597s):
Trace[939984059]: [30.000384597s] [30.000384597s] END
E1204 23:59:53.678967 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I1204 23:59:53.679127 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-12-04 23:59:23.678728941 +0000 UTC m=+0.096845116) (total time: 30.000385946s):
Trace[911902081]: [30.000385946s] [30.000385946s] END
E1204 23:59:53.679201 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
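The three 30-second ListAndWatch timeouts above mean this coredns instance could not reach the in-cluster apiserver VIP (10.96.0.1:443) right after the restart. A quick way to confirm the Service and its backing endpoint exist (a sketch, assuming the kubeconfig context minikube created for this profile):

    kubectl --context old-k8s-version-066167 get svc,endpoints kubernetes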
==> coredns [9e6a318a81516a280c231fc6ccbd521ce3b36c966b7c6128149f692e33b3343c] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:35542 - 45034 "HINFO IN 8601463055701714850.6186383635598761622. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021848409s
==> describe nodes <==
Name:               old-k8s-version-066167
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=old-k8s-version-066167
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
                    minikube.k8s.io/name=old-k8s-version-066167
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2024_12_04T23_56_54_0700
                    minikube.k8s.io/version=v1.34.0
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 04 Dec 2024 23:56:50 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  old-k8s-version-066167
  AcquireTime:     <unset>
  RenewTime:       Thu, 05 Dec 2024 00:05:04 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Thu, 05 Dec 2024 00:00:15 +0000   Wed, 04 Dec 2024 23:56:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 05 Dec 2024 00:00:15 +0000   Wed, 04 Dec 2024 23:56:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 05 Dec 2024 00:00:15 +0000   Wed, 04 Dec 2024 23:56:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 05 Dec 2024 00:00:15 +0000   Wed, 04 Dec 2024 23:57:10 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.76.2
  Hostname:    old-k8s-version-066167
Capacity:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022300Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  203034800Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8022300Ki
  pods:               110
System Info:
  Machine ID:                 63394a44c16444a7a3bcf859a64f3a4b
  System UUID:                1119a8be-28eb-41ec-878c-8018329a0e7b
  Boot ID:                    a4788b5f-5e14-4e80-9d00-4606b5d89fd6
  Kernel Version:             5.15.0-1072-aws
  OS Image:                   Ubuntu 22.04.5 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.7.22
  Kubelet Version:            v1.20.0
  Kube-Proxy Version:         v1.20.0
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (12 in total)
  Namespace             Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------             ----                                             ------------  ----------  ---------------  -------------  ---
  default               busybox                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
  kube-system           coredns-74ff55c5b-vb8kf                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m1s
  kube-system           etcd-old-k8s-version-066167                      100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m9s
  kube-system           kindnet-k6vqq                                    100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m1s
  kube-system           kube-apiserver-old-k8s-version-066167            250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m9s
  kube-system           kube-controller-manager-old-k8s-version-066167   200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m9s
  kube-system           kube-proxy-xh97b                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
  kube-system           kube-scheduler-old-k8s-version-066167            100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m9s
  kube-system           metrics-server-9975d5f86-ksvdj                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m32s
  kube-system           storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m59s
  kubernetes-dashboard  dashboard-metrics-scraper-8d5bb5db8-z9qx4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
  kubernetes-dashboard  kubernetes-dashboard-cd95d586-lgvv5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                950m (47%)   100m (5%)
  memory             420Mi (5%)   220Mi (2%)
  ephemeral-storage  100Mi (0%)   0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
  hugepages-32Mi     0 (0%)       0 (0%)
  hugepages-64Ki     0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  NodeHasSufficientMemory  8m28s (x5 over 8m28s)  kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m28s (x4 over 8m28s)  kubelet     Node old-k8s-version-066167 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m28s (x4 over 8m28s)  kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientPID
  Normal  Starting                 8m10s                  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  8m10s                  kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m10s                  kubelet     Node old-k8s-version-066167 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m10s                  kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m9s                   kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                8m1s                   kubelet     Node old-k8s-version-066167 status is now: NodeReady
  Normal  Starting                 8m                     kube-proxy  Starting kube-proxy.
  Normal  Starting                 6m3s                   kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)    kubelet     Node old-k8s-version-066167 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m3s (x7 over 6m3s)    kubelet     Node old-k8s-version-066167 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  6m3s                   kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
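The block above is standard kubectl describe output captured by minikube logs; the live view can be regenerated with (a sketch, same context assumption as above):

    kubectl --context old-k8s-version-066167 describe node old-k8s-version-066167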
==> dmesg <==
[Dec 4 22:17] ACPI: SRAT not present
[ +0.000000] ACPI: SRAT not present
[ +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
[ +0.014613] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
[ +0.436388] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
[ +0.024772] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
[ +0.027958] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.026858] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
[ +0.650973] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
[ +6.181261] kauditd_printk_skb: 36 callbacks suppressed
[Dec 4 23:48] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
[Dec 5 00:00] hrtimer: interrupt took 8247705 ns
==> etcd [03a793869f7756988694ffec8aac13a14b390de303baa4c125c0cd29db84fa2e] <==
2024-12-04 23:56:43.855411 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
2024-12-04 23:56:43.855460 I | embed: listening for peers on 192.168.76.2:2380
raft2024/12/04 23:56:43 INFO: ea7e25599daad906 is starting a new election at term 1
raft2024/12/04 23:56:43 INFO: ea7e25599daad906 became candidate at term 2
raft2024/12/04 23:56:43 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2024/12/04 23:56:43 INFO: ea7e25599daad906 became leader at term 2
raft2024/12/04 23:56:43 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2024-12-04 23:56:43.903346 I | etcdserver: setting up the initial cluster version to 3.4
2024-12-04 23:56:43.904283 N | etcdserver/membership: set the initial cluster version to 3.4
2024-12-04 23:56:43.904329 I | etcdserver/api: enabled capabilities for version 3.4
2024-12-04 23:56:43.904368 I | etcdserver: published {Name:old-k8s-version-066167 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2024-12-04 23:56:43.904407 I | embed: ready to serve client requests
2024-12-04 23:56:43.905904 I | embed: serving client requests on 192.168.76.2:2379
2024-12-04 23:56:43.906172 I | embed: ready to serve client requests
2024-12-04 23:56:43.907324 I | embed: serving client requests on 127.0.0.1:2379
2024-12-04 23:57:07.093887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-04 23:57:16.521857 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-04 23:57:26.522077 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-04 23:57:36.522112 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-04 23:57:46.522195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-04 23:57:56.522862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-04 23:58:06.522337 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-04 23:58:16.522029 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-04 23:58:26.522160 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-04 23:58:36.521975 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [d9b089970902b94af71001bf79cb564747ee239b08f7a6d115123ef30278d716] <==
2024-12-05 00:01:09.880500 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:01:19.880362 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:01:29.880399 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:01:39.880494 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:01:49.880592 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:01:59.880539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:02:09.880356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:02:19.880499 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:02:29.880348 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:02:39.880433 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:02:49.880359 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:02:59.880449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:03:09.880502 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:03:19.880527 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:03:29.880391 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:03:39.880419 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:03:49.880422 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:03:59.880551 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:04:09.880381 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:04:19.880496 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:04:29.880380 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:04:39.880318 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:04:49.880547 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:04:59.880543 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-05 00:05:09.880568 I | etcdserver/api/etcdhttp: /health OK (status code 200)
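Both etcd instances report /health OK every ten seconds for the entire run, so the control-plane failure above is not an etcd outage. If needed, the same health endpoint can be probed by hand (a sketch, assuming the kubeadm-default metrics listener on 127.0.0.1:2381, which this log does not confirm):

    minikube ssh -p old-k8s-version-066167 -- curl -s http://127.0.0.1:2381/health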
==> kernel <==
00:05:11 up 1:47, 0 users, load average: 2.40, 2.88, 2.81
Linux old-k8s-version-066167 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [3bd39d78282b9197f8a1e23a5f24b93b9c0f66a830c882fa704ff6fd29a43c1d] <==
I1204 23:57:13.417500 1 main.go:148] setting mtu 1500 for CNI
I1204 23:57:13.417517 1 main.go:178] kindnetd IP family: "ipv4"
I1204 23:57:13.417534 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
I1204 23:57:13.809731 1 controller.go:361] Starting controller kube-network-policies
I1204 23:57:13.810149 1 controller.go:365] Waiting for informer caches to sync
I1204 23:57:13.810263 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I1204 23:57:14.014076 1 shared_informer.go:320] Caches are synced for kube-network-policies
I1204 23:57:14.014105 1 metrics.go:61] Registering metrics
I1204 23:57:14.014379 1 controller.go:401] Syncing nftables rules
I1204 23:57:23.809571 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1204 23:57:23.809706 1 main.go:301] handling current node
I1204 23:57:33.809999 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1204 23:57:33.810205 1 main.go:301] handling current node
I1204 23:57:43.818780 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1204 23:57:43.818813 1 main.go:301] handling current node
I1204 23:57:53.813449 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1204 23:57:53.813481 1 main.go:301] handling current node
I1204 23:58:03.810116 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1204 23:58:03.810171 1 main.go:301] handling current node
I1204 23:58:13.810035 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1204 23:58:13.810135 1 main.go:301] handling current node
I1204 23:58:23.812509 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1204 23:58:23.812597 1 main.go:301] handling current node
I1204 23:58:33.809527 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1204 23:58:33.809622 1 main.go:301] handling current node
==> kindnet [9c39da019cfbde5f487029c239ec5e7a7cf50deb98daa7d4409a7158f166f0ae] <==
I1205 00:03:06.118129 1 main.go:301] handling current node
I1205 00:03:16.110260 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:03:16.110296 1 main.go:301] handling current node
I1205 00:03:26.110529 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:03:26.110568 1 main.go:301] handling current node
I1205 00:03:36.118125 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:03:36.118162 1 main.go:301] handling current node
I1205 00:03:46.118155 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:03:46.118196 1 main.go:301] handling current node
I1205 00:03:56.118159 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:03:56.118193 1 main.go:301] handling current node
I1205 00:04:06.116282 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:04:06.116335 1 main.go:301] handling current node
I1205 00:04:16.113050 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:04:16.113088 1 main.go:301] handling current node
I1205 00:04:26.109817 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:04:26.109852 1 main.go:301] handling current node
I1205 00:04:36.114087 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:04:36.114124 1 main.go:301] handling current node
I1205 00:04:46.114628 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:04:46.114667 1 main.go:301] handling current node
I1205 00:04:56.117261 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:04:56.117295 1 main.go:301] handling current node
I1205 00:05:06.117234 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I1205 00:05:06.117269 1 main.go:301] handling current node
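Both kindnet instances are steadily handling the single node, so pod networking looks intact. The daemonset's pods can be listed via the app=kindnet label that appears in the daemonset dump in the kube-controller-manager section below (a sketch):

    kubectl --context old-k8s-version-066167 -n kube-system get pods -l app=kindnet -o wide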
==> kube-apiserver [138be331ccdd2b57628d183f07860ec4afa6d79e2124a4a4e250a240d9cc18a8] <==
I1204 23:56:51.367354 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1204 23:56:51.367383 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1204 23:56:51.378085 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1204 23:56:51.382826 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1204 23:56:51.382850 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1204 23:56:51.864208 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1204 23:56:51.914540 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1204 23:56:52.018382 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I1204 23:56:52.019763 1 controller.go:606] quota admission added evaluator for: endpoints
I1204 23:56:52.024982 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1204 23:56:53.073092 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1204 23:56:53.368536 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1204 23:56:53.430567 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1204 23:57:01.869049 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1204 23:57:10.226010 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1204 23:57:10.231944 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1204 23:57:23.496853 1 client.go:360] parsed scheme: "passthrough"
I1204 23:57:23.496914 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1204 23:57:23.496922 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1204 23:58:04.681904 1 client.go:360] parsed scheme: "passthrough"
I1204 23:58:04.681948 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1204 23:58:04.681957 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1204 23:58:35.536497 1 client.go:360] parsed scheme: "passthrough"
I1204 23:58:35.536561 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1204 23:58:35.536570 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [d730ecbd86d8ef0d4804fcf0250ea38521124bf689355deae24fb129d13290a7] <==
I1205 00:01:10.616841 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1205 00:01:10.616852 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1205 00:01:43.793553 1 client.go:360] parsed scheme: "passthrough"
I1205 00:01:43.793636 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1205 00:01:43.793646 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1205 00:02:25.154125 1 client.go:360] parsed scheme: "passthrough"
I1205 00:02:25.154171 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1205 00:02:25.154179 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1205 00:02:25.392155 1 handler_proxy.go:102] no RequestInfo found in the context
E1205 00:02:25.392230 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1205 00:02:25.392245 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1205 00:03:09.221663 1 client.go:360] parsed scheme: "passthrough"
I1205 00:03:09.221707 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1205 00:03:09.221715 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1205 00:03:50.421710 1 client.go:360] parsed scheme: "passthrough"
I1205 00:03:50.421761 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1205 00:03:50.421770 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1205 00:04:22.869903 1 handler_proxy.go:102] no RequestInfo found in the context
E1205 00:04:22.870001 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1205 00:04:22.870014 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1205 00:04:32.361912 1 client.go:360] parsed scheme: "passthrough"
I1205 00:04:32.361957 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1205 00:04:32.361965 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
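The recurring OpenAPI 503s for v1beta1.metrics.k8s.io above are consistent with the failed metrics-server image pull seen in the containerd section: the aggregated API has no healthy backend. A sketch for inspecting the backing workload (the k8s-app label is the addon's usual convention, not confirmed by this log):

    kubectl --context old-k8s-version-066167 -n kube-system get deploy,pods -l k8s-app=metrics-server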
==> kube-controller-manager [0c57ea5d02a99eb0c8e92d3c181d445e9d90ba4f8cc1819c695d87d293de3196] <==
W1205 00:00:46.528820 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1205 00:01:12.575220 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1205 00:01:18.179416 1 request.go:655] Throttling request took 1.048410278s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
W1205 00:01:19.031149 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1205 00:01:43.077409 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1205 00:01:50.681678 1 request.go:655] Throttling request took 1.04855418s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W1205 00:01:51.533140 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1205 00:02:13.579363 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1205 00:02:23.183664 1 request.go:655] Throttling request took 1.047807439s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1205 00:02:24.035209 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1205 00:02:44.081609 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1205 00:02:55.685732 1 request.go:655] Throttling request took 1.0477809s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1205 00:02:56.537503 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1205 00:03:14.583446 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1205 00:03:28.187792 1 request.go:655] Throttling request took 1.047765071s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
W1205 00:03:29.039255 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1205 00:03:45.086459 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1205 00:04:00.689645 1 request.go:655] Throttling request took 1.048234915s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W1205 00:04:01.542110 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1205 00:04:15.588387 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1205 00:04:33.192968 1 request.go:655] Throttling request took 1.048389978s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
W1205 00:04:34.078846 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1205 00:04:46.090421 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1205 00:05:05.729246 1 request.go:655] Throttling request took 1.04854089s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
W1205 00:05:06.580789 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
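The controller-manager errors above are the same symptom from the consumer side: garbage collection and resource-quota discovery keep tripping over the unavailable metrics.k8s.io/v1beta1 group. The aggregated APIService status can be checked directly (a sketch):

    kubectl --context old-k8s-version-066167 get apiservice v1beta1.metrics.k8s.io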
==> kube-controller-manager [cc6be8b93da4734afb8721deddc5dd86d31687bef86e4f4bbee6e8885d7eeb15] <==
I1204 23:57:10.228139 1 shared_informer.go:247] Caches are synced for TTL
I1204 23:57:10.244736 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-066167" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1204 23:57:10.293607 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I1204 23:57:10.316660 1 shared_informer.go:247] Caches are synced for resource quota
I1204 23:57:10.317067 1 shared_informer.go:247] Caches are synced for resource quota
I1204 23:57:10.321571 1 shared_informer.go:247] Caches are synced for persistent volume
I1204 23:57:10.325353 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k6vqq"
I1204 23:57:10.325552 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-xh97b"
I1204 23:57:10.325661 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vb8kf"
I1204 23:57:10.350178 1 shared_informer.go:247] Caches are synced for expand
I1204 23:57:10.351214 1 shared_informer.go:247] Caches are synced for PV protection
I1204 23:57:10.404370 1 shared_informer.go:247] Caches are synced for attach detach
I1204 23:57:10.443160 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vgvw4"
I1204 23:57:10.452896 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
E1204 23:57:10.572380 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"a57a36c8-4b4a-4b45-8bb9-ec5b0cc99311", ResourceVersion:"398", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63868953414, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241023-a345ebe4\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001e2e660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001e2e680)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001e2e6a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001e2e6c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001e2e6e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generatio
n:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001e2e700), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:
(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001e2e720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlo
ckStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CS
I:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001e2e740), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Q
uobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241023-a345ebe4", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001e2e760)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001e2e7a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i
:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", Sub
Path:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001e20720), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001de92c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000175880), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinit
y:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000167218)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001de9310)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v
1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I1204 23:57:10.582567 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1204 23:57:10.849796 1 shared_informer.go:247] Caches are synced for garbage collector
I1204 23:57:10.849825 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1204 23:57:10.885014 1 shared_informer.go:247] Caches are synced for garbage collector
I1204 23:57:11.800291 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I1204 23:57:11.843210 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-vgvw4"
I1204 23:57:15.206454 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I1204 23:58:38.746036 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E1204 23:58:39.050155 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E1204 23:58:39.129164 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
==> kube-proxy [355e63aab5c7ca3f897f0a3185163e3c74ba8b9a3d0ac4ea0d8b36a43184be2f] <==
I1204 23:59:24.720336 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1204 23:59:24.720406 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1204 23:59:24.887906 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1204 23:59:24.888168 1 server_others.go:185] Using iptables Proxier.
I1204 23:59:24.914318 1 server.go:650] Version: v1.20.0
I1204 23:59:24.935301 1 config.go:315] Starting service config controller
I1204 23:59:24.935318 1 shared_informer.go:240] Waiting for caches to sync for service config
I1204 23:59:24.935339 1 config.go:224] Starting endpoint slice config controller
I1204 23:59:24.935343 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1204 23:59:25.051395 1 shared_informer.go:247] Caches are synced for service config
I1204 23:59:25.173242 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [f9c9b2c0e523b603269fe6659c30d587bc52d271158b2673837ef5e1aee00c88] <==
I1204 23:57:11.507069 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1204 23:57:11.507165 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1204 23:57:11.538723 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1204 23:57:11.538818 1 server_others.go:185] Using iptables Proxier.
I1204 23:57:11.539152 1 server.go:650] Version: v1.20.0
I1204 23:57:11.539974 1 config.go:315] Starting service config controller
I1204 23:57:11.539988 1 shared_informer.go:240] Waiting for caches to sync for service config
I1204 23:57:11.540006 1 config.go:224] Starting endpoint slice config controller
I1204 23:57:11.540010 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1204 23:57:11.640113 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1204 23:57:11.640187 1 shared_informer.go:247] Caches are synced for service config
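Both kube-proxy instances log 'Unknown proxy mode "", assuming iptables proxy', i.e. the mode field in the kube-proxy configuration is empty and iptables is the fallback; the rendered configuration can be dumped from its ConfigMap (a sketch):

    kubectl --context old-k8s-version-066167 -n kube-system get configmap kube-proxy -o yaml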
==> kube-scheduler [05ccefe05793d3051a4ddab3cb0a4cbc518101e0565811c7a308fd2c65216fe0] <==
W1204 23:56:50.567444 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1204 23:56:50.632475 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1204 23:56:50.632669 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1204 23:56:50.641571 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1204 23:56:50.652126 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1204 23:56:50.660633 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1204 23:56:50.666496 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1204 23:56:50.666852 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1204 23:56:50.667052 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1204 23:56:50.667317 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1204 23:56:50.667557 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1204 23:56:50.667807 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1204 23:56:50.667887 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1204 23:56:50.671742 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1204 23:56:50.671919 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1204 23:56:50.672048 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1204 23:56:50.672178 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1204 23:56:51.547484 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1204 23:56:51.604108 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1204 23:56:51.604460 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1204 23:56:51.618354 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1204 23:56:51.652239 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1204 23:56:51.657470 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1204 23:56:51.704258 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I1204 23:56:54.832918 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
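The burst of `forbidden` list/watch errors at 23:56:50-51 is consistent with the scheduler coming up before the apiserver has finished publishing its built-in RBAC bindings; the errors stop and the client-ca cache syncs by 23:56:54. One way to confirm the permissions are in place after startup (a sketch; `--as` impersonation requires that your kubeconfig user is allowed to impersonate):
    $ kubectl --context old-k8s-version-066167 auth can-i list pods --as=system:kube-scheduler
    $ kubectl --context old-k8s-version-066167 auth can-i list nodes --as=system:kube-scheduler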
==> kube-scheduler [4576860463a38520db97290a77379fececb4c291a8547c0cb354a6f44b20cd30] <==
I1204 23:59:14.260397 1 serving.go:331] Generated self-signed cert in-memory
W1204 23:59:21.821955 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1204 23:59:21.821981 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1204 23:59:21.821990 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1204 23:59:21.821995 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1204 23:59:22.005867 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1204 23:59:22.014441 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1204 23:59:22.014493 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1204 23:59:22.026179 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1204 23:59:22.227280 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
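The restarted scheduler hits the same `extension-apiserver-authentication` lookup failure and, as the warnings say, continues without authentication configuration. If the lookup is expected to succeed, the log's own suggested fix translates to roughly the following (the rolebinding name here is a hypothetical placeholder, and `--user` is used instead of `--serviceaccount` because the scheduler authenticates as the user `system:kube-scheduler`):
    $ kubectl --context old-k8s-version-066167 -n kube-system create rolebinding \
        scheduler-authn-reader \
        --role=extension-apiserver-authentication-reader \
        --user=system:kube-scheduler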
==> kubelet <==
Dec 05 00:03:36 old-k8s-version-066167 kubelet[664]: E1205 00:03:36.169303 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: I1205 00:03:49.168556 664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.168901 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
Dec 05 00:03:49 old-k8s-version-066167 kubelet[664]: E1205 00:03:49.170002 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: I1205 00:04:01.168532 664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169436 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 05 00:04:01 old-k8s-version-066167 kubelet[664]: E1205 00:04:01.169654 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: I1205 00:04:14.168490 664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
Dec 05 00:04:14 old-k8s-version-066167 kubelet[664]: E1205 00:04:14.169276 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
Dec 05 00:04:15 old-k8s-version-066167 kubelet[664]: E1205 00:04:15.169312 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: I1205 00:04:25.168400 664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
Dec 05 00:04:25 old-k8s-version-066167 kubelet[664]: E1205 00:04:25.168769 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
Dec 05 00:04:29 old-k8s-version-066167 kubelet[664]: E1205 00:04:29.169441 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: I1205 00:04:36.172319 664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
Dec 05 00:04:36 old-k8s-version-066167 kubelet[664]: E1205 00:04:36.172643 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
Dec 05 00:04:42 old-k8s-version-066167 kubelet[664]: E1205 00:04:42.172818 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: I1205 00:04:51.168499 664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
Dec 05 00:04:51 old-k8s-version-066167 kubelet[664]: E1205 00:04:51.168842 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.180383 664 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.180456 664 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181002 664 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-7rsv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Dec 05 00:04:53 old-k8s-version-066167 kubelet[664]: E1205 00:04:53.181049 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Dec 05 00:05:06 old-k8s-version-066167 kubelet[664]: I1205 00:05:06.168524 664 scope.go:95] [topologymanager] RemoveContainer - Container ID: e35397ca54a102a73770730d095449df8cc3d2228d1a0cbd45789049bd855aa0
Dec 05 00:05:06 old-k8s-version-066167 kubelet[664]: E1205 00:05:06.169386 664 pod_workers.go:191] Error syncing pod cf5deac0-c718-4fbe-9210-072a20433ee2 ("dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-z9qx4_kubernetes-dashboard(cf5deac0-c718-4fbe-9210-072a20433ee2)"
Dec 05 00:05:07 old-k8s-version-066167 kubelet[664]: E1205 00:05:07.169063 664 pod_workers.go:191] Error syncing pod daace8ea-0220-4827-83d0-a829c2b20a57 ("metrics-server-9975d5f86-ksvdj_kube-system(daace8ea-0220-4827-83d0-a829c2b20a57)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
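Every metrics-server pull above fails with an NXDOMAIN on `fake.domain`, matching the `Using image fake.domain/registry.k8s.io/echoserver:1.4` line in the start output; the registry host appears to be deliberately unresolvable, so the ImagePullBackOff itself is expected test behavior. To read back the image reference the pod is configured with (a sketch, using the pod name from the messages above):
    $ kubectl --context old-k8s-version-066167 -n kube-system get pod \
        metrics-server-9975d5f86-ksvdj -o jsonpath='{.spec.containers[0].image}'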
==> kubernetes-dashboard [eadd97cb808fe08fdb4aaef8da6368ee15002754164453ddd4a8f5af6308549e] <==
2024/12/04 23:59:49 Starting overwatch
2024/12/04 23:59:49 Using namespace: kubernetes-dashboard
2024/12/04 23:59:49 Using in-cluster config to connect to apiserver
2024/12/04 23:59:49 Using secret token for csrf signing
2024/12/04 23:59:49 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/12/04 23:59:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/12/04 23:59:49 Successful initial request to the apiserver, version: v1.20.0
2024/12/04 23:59:49 Generating JWE encryption key
2024/12/04 23:59:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/12/04 23:59:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/12/04 23:59:49 Initializing JWE encryption key from synchronized object
2024/12/04 23:59:49 Creating in-cluster Sidecar client
2024/12/04 23:59:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/04 23:59:49 Serving insecurely on HTTP port: 9090
2024/12/05 00:00:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/05 00:00:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/05 00:01:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/05 00:01:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/05 00:02:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/05 00:02:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/05 00:03:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/05 00:03:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/05 00:04:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/05 00:04:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
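The dashboard's metric client health check fails every 30 seconds for the entire run, which lines up with `dashboard-metrics-scraper` sitting in CrashLoopBackOff in the kubelet log above: the Service exists but never gains a ready endpoint. A quick check (sketch):
    $ kubectl --context old-k8s-version-066167 -n kubernetes-dashboard \
        get endpoints dashboard-metrics-scraper
    # an empty ENDPOINTS column would explain the "unable to handle the request" errors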
==> storage-provisioner [61ffbe5238187100a841dc8790386e7d43465ae9fc1221cf1ab32e059e4cdb4b] <==
I1205 00:00:10.468392 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1205 00:00:10.526019 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1205 00:00:10.526220 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1205 00:00:28.091165 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1205 00:00:28.093949 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-066167_ff66112d-c4a2-4229-897e-56fd3d5df8b6!
I1205 00:00:28.100964 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"33f61396-308b-4034-a659-b486fa025384", APIVersion:"v1", ResourceVersion:"834", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-066167_ff66112d-c4a2-4229-897e-56fd3d5df8b6 became leader
I1205 00:00:28.194971 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-066167_ff66112d-c4a2-4229-897e-56fd3d5df8b6!
==> storage-provisioner [cf535a7a2872e76c4d36b0329b3244279961365b25cdfda7c7960443118286cf] <==
I1204 23:59:24.782347 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1204 23:59:54.784716 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
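The earlier storage-provisioner instance (`cf535a...`) dies because it cannot reach the in-cluster apiserver Service at 10.96.0.1:443 during the restart window, while its replacement (`61ffbe...`) starts cleanly about fifteen seconds later, so this looks transient rather than the root failure. A minimal in-cluster connectivity probe (a sketch; runs a throwaway pod and assumes the `curlimages/curl` image is pullable):
    $ kubectl --context old-k8s-version-066167 run nettest --rm -it --restart=Never \
        --image=curlimages/curl -- curl -sk https://10.96.0.1/version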
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-066167 -n old-k8s-version-066167
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-066167 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-ksvdj
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-066167 describe pod metrics-server-9975d5f86-ksvdj
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-066167 describe pod metrics-server-9975d5f86-ksvdj: exit status 1 (101.40223ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-ksvdj" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-066167 describe pod metrics-server-9975d5f86-ksvdj: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.70s)
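For reference, a rerun of just this failing test goes through the standard `go test` run filter (a sketch, assuming minikube's `test/integration` layout; the harness typically needs additional flags such as the driver and binary path, per minikube's contributing docs):
    $ go test ./test/integration -timeout 70m \
        -run 'TestStartStop/group/old-k8s-version/serial/SecondStart'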