=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-623695 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
E1209 11:25:27.567617 592080 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/addons-764596/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-623695 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m8.842301344s)
-- stdout --
* [old-k8s-version-623695] minikube v1.34.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20068
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-623695" primary control-plane node in "old-k8s-version-623695" cluster
* Pulling base image v0.0.45-1730888964-19917 ...
* Restarting existing docker container for "old-k8s-version-623695" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
* Verifying Kubernetes components...
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-623695 addons enable metrics-server
* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
-- /stdout --
** stderr **
I1209 11:25:22.128588 800461 out.go:345] Setting OutFile to fd 1 ...
I1209 11:25:22.128844 800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 11:25:22.128872 800461 out.go:358] Setting ErrFile to fd 2...
I1209 11:25:22.128894 800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 11:25:22.129203 800461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
I1209 11:25:22.129651 800461 out.go:352] Setting JSON to false
I1209 11:25:22.130728 800461 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14870,"bootTime":1733728653,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1209 11:25:22.130832 800461 start.go:139] virtualization:
I1209 11:25:22.134723 800461 out.go:177] * [old-k8s-version-623695] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1209 11:25:22.137187 800461 notify.go:220] Checking for updates...
I1209 11:25:22.138119 800461 out.go:177] - MINIKUBE_LOCATION=20068
I1209 11:25:22.140125 800461 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1209 11:25:22.144142 800461 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
I1209 11:25:22.146573 800461 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
I1209 11:25:22.148989 800461 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1209 11:25:22.151127 800461 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1209 11:25:22.154138 800461 config.go:182] Loaded profile config "old-k8s-version-623695": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1209 11:25:22.156633 800461 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
I1209 11:25:22.158810 800461 driver.go:394] Setting default libvirt URI to qemu:///system
I1209 11:25:22.192929 800461 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I1209 11:25:22.193109 800461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 11:25:22.281953 800461 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:67 SystemTime:2024-12-09 11:25:22.267248146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1209 11:25:22.282099 800461 docker.go:318] overlay module found
I1209 11:25:22.284719 800461 out.go:177] * Using the docker driver based on existing profile
I1209 11:25:22.286823 800461 start.go:297] selected driver: docker
I1209 11:25:22.286847 800461 start.go:901] validating driver "docker" against &{Name:old-k8s-version-623695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 11:25:22.286960 800461 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1209 11:25:22.287670 800461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 11:25:22.366631 800461 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:67 SystemTime:2024-12-09 11:25:22.354752609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1209 11:25:22.367065 800461 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1209 11:25:22.367084 800461 cni.go:84] Creating CNI manager for ""
I1209 11:25:22.367151 800461 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1209 11:25:22.367210 800461 start.go:340] cluster config:
{Name:old-k8s-version-623695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 11:25:22.369677 800461 out.go:177] * Starting "old-k8s-version-623695" primary control-plane node in "old-k8s-version-623695" cluster
I1209 11:25:22.371827 800461 cache.go:121] Beginning downloading kic base image for docker with containerd
I1209 11:25:22.374004 800461 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
I1209 11:25:22.376214 800461 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1209 11:25:22.376276 800461 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I1209 11:25:22.376285 800461 cache.go:56] Caching tarball of preloaded images
I1209 11:25:22.376386 800461 preload.go:172] Found /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1209 11:25:22.376396 800461 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I1209 11:25:22.376518 800461 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/config.json ...
I1209 11:25:22.376739 800461 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
I1209 11:25:22.418696 800461 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
I1209 11:25:22.418723 800461 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
I1209 11:25:22.418739 800461 cache.go:194] Successfully downloaded all kic artifacts
I1209 11:25:22.418774 800461 start.go:360] acquireMachinesLock for old-k8s-version-623695: {Name:mk30ad5946677ce9584302a554d89e2bca295e92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 11:25:22.418847 800461 start.go:364] duration metric: took 44.161µs to acquireMachinesLock for "old-k8s-version-623695"
I1209 11:25:22.418876 800461 start.go:96] Skipping create...Using existing machine configuration
I1209 11:25:22.418887 800461 fix.go:54] fixHost starting:
I1209 11:25:22.419173 800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
I1209 11:25:22.471000 800461 fix.go:112] recreateIfNeeded on old-k8s-version-623695: state=Stopped err=<nil>
W1209 11:25:22.471035 800461 fix.go:138] unexpected machine state, will restart: <nil>
I1209 11:25:22.473650 800461 out.go:177] * Restarting existing docker container for "old-k8s-version-623695" ...
I1209 11:25:22.475782 800461 cli_runner.go:164] Run: docker start old-k8s-version-623695
I1209 11:25:22.850233 800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
I1209 11:25:22.877762 800461 kic.go:430] container "old-k8s-version-623695" state is running.
I1209 11:25:22.878168 800461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-623695
I1209 11:25:22.902787 800461 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/config.json ...
I1209 11:25:22.903009 800461 machine.go:93] provisionDockerMachine start ...
I1209 11:25:22.903071 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:22.922309 800461 main.go:141] libmachine: Using SSH client type: native
I1209 11:25:22.922579 800461 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 33802 <nil> <nil>}
I1209 11:25:22.922589 800461 main.go:141] libmachine: About to run SSH command:
hostname
I1209 11:25:22.925826 800461 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I1209 11:25:26.077094 800461 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-623695
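The "Error dialing TCP: ssh: handshake failed: EOF" line above is the provisioner racing the freshly restarted container's sshd; libmachine simply retries until the forwarded port accepts connections. A minimal Go sketch of that wait loop, assuming the forwarded address 127.0.0.1:33802 from this run:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls a TCP address until it accepts a connection or the
// timeout elapses, mirroring libmachine's retry-until-sshd-is-up behavior.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:33802", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}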
I1209 11:25:26.077206 800461 ubuntu.go:169] provisioning hostname "old-k8s-version-623695"
I1209 11:25:26.077313 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:26.107019 800461 main.go:141] libmachine: Using SSH client type: native
I1209 11:25:26.107285 800461 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 33802 <nil> <nil>}
I1209 11:25:26.107296 800461 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-623695 && echo "old-k8s-version-623695" | sudo tee /etc/hostname
I1209 11:25:26.281754 800461 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-623695
I1209 11:25:26.281845 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:26.315039 800461 main.go:141] libmachine: Using SSH client type: native
I1209 11:25:26.315309 800461 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 33802 <nil> <nil>}
I1209 11:25:26.315333 800461 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-623695' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-623695/g' /etc/hosts;
  else
    echo '127.0.1.1 old-k8s-version-623695' | sudo tee -a /etc/hosts;
  fi
fi
I1209 11:25:26.465174 800461 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1209 11:25:26.465203 800461 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20068-586689/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-586689/.minikube}
I1209 11:25:26.465235 800461 ubuntu.go:177] setting up certificates
I1209 11:25:26.465244 800461 provision.go:84] configureAuth start
I1209 11:25:26.465307 800461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-623695
I1209 11:25:26.498741 800461 provision.go:143] copyHostCerts
I1209 11:25:26.498816 800461 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem, removing ...
I1209 11:25:26.498835 800461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem
I1209 11:25:26.498926 800461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem (1078 bytes)
I1209 11:25:26.499039 800461 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem, removing ...
I1209 11:25:26.499050 800461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem
I1209 11:25:26.499080 800461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem (1123 bytes)
I1209 11:25:26.499147 800461 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem, removing ...
I1209 11:25:26.499157 800461 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem
I1209 11:25:26.499182 800461 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem (1679 bytes)
I1209 11:25:26.499244 800461 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-623695 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-623695]
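The server cert generated above is signed by the shared minikubeCA and carries the SANs listed in the log line (loopback, node IP, and hostnames) so both tunneled and in-cluster clients can verify it. A self-contained Go sketch of issuing such a cert; the CA here is generated in place purely for illustration, whereas minikube loads it from certs/ca.pem and certs/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Illustrative stand-in for the profile CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the log line above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-623695"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-623695"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d DER bytes\n", len(der))
}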
I1209 11:25:26.757800 800461 provision.go:177] copyRemoteCerts
I1209 11:25:26.757872 800461 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1209 11:25:26.757923 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:26.776079 800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
I1209 11:25:26.869943 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1209 11:25:26.911148 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1209 11:25:26.946806 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1209 11:25:26.992749 800461 provision.go:87] duration metric: took 527.487908ms to configureAuth
I1209 11:25:26.992774 800461 ubuntu.go:193] setting minikube options for container-runtime
I1209 11:25:26.992977 800461 config.go:182] Loaded profile config "old-k8s-version-623695": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1209 11:25:26.992984 800461 machine.go:96] duration metric: took 4.089968262s to provisionDockerMachine
I1209 11:25:26.992992 800461 start.go:293] postStartSetup for "old-k8s-version-623695" (driver="docker")
I1209 11:25:26.993003 800461 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1209 11:25:26.993052 800461 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1209 11:25:26.993096 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:27.027430 800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
I1209 11:25:27.132583 800461 ssh_runner.go:195] Run: cat /etc/os-release
I1209 11:25:27.136238 800461 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1209 11:25:27.136278 800461 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1209 11:25:27.136289 800461 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1209 11:25:27.136297 800461 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1209 11:25:27.136308 800461 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-586689/.minikube/addons for local assets ...
I1209 11:25:27.136365 800461 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-586689/.minikube/files for local assets ...
I1209 11:25:27.136450 800461 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem -> 5920802.pem in /etc/ssl/certs
I1209 11:25:27.136562 800461 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1209 11:25:27.150729 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem --> /etc/ssl/certs/5920802.pem (1708 bytes)
I1209 11:25:27.188672 800461 start.go:296] duration metric: took 195.662785ms for postStartSetup
I1209 11:25:27.188772 800461 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1209 11:25:27.188818 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:27.218791 800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
I1209 11:25:27.318274 800461 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1209 11:25:27.323920 800461 fix.go:56] duration metric: took 4.905026182s for fixHost
I1209 11:25:27.323946 800461 start.go:83] releasing machines lock for "old-k8s-version-623695", held for 4.905086779s
I1209 11:25:27.324059 800461 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-623695
I1209 11:25:27.349005 800461 ssh_runner.go:195] Run: cat /version.json
I1209 11:25:27.349072 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:27.349287 800461 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1209 11:25:27.349352 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:27.381331 800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
I1209 11:25:27.387659 800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
I1209 11:25:27.626453 800461 ssh_runner.go:195] Run: systemctl --version
I1209 11:25:27.631278 800461 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1209 11:25:27.641788 800461 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1209 11:25:27.671024 800461 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1209 11:25:27.671173 800461 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1209 11:25:27.682583 800461 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
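The loopback patch above injects a "name" field and pins cniVersion to 1.0.0, since the v1.x CNI plugins shipped in the base image reject the unnamed legacy loopback conf. A Go sketch of the same patch, assuming an illustrative conf path:

package main

import (
	"encoding/json"
	"os"
)

func main() {
	const path = "/etc/cni/net.d/200-loopback.conf" // illustrative path; the log globs *loopback.conf*
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var conf map[string]any
	if err := json.Unmarshal(data, &conf); err != nil {
		panic(err)
	}
	// Add "name" only if missing, then pin the version, as the sed above does.
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	out, _ := json.MarshalIndent(conf, "", "  ")
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}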
I1209 11:25:27.682645 800461 start.go:495] detecting cgroup driver to use...
I1209 11:25:27.682702 800461 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1209 11:25:27.682775 800461 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1209 11:25:27.705704 800461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1209 11:25:27.732685 800461 docker.go:217] disabling cri-docker service (if available) ...
I1209 11:25:27.732799 800461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1209 11:25:27.755963 800461 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1209 11:25:27.779797 800461 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1209 11:25:27.924381 800461 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1209 11:25:28.066666 800461 docker.go:233] disabling docker service ...
I1209 11:25:28.066786 800461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1209 11:25:28.087938 800461 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1209 11:25:28.102978 800461 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1209 11:25:28.269735 800461 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1209 11:25:28.431244 800461 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1209 11:25:28.447421 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1209 11:25:28.481183 800461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I1209 11:25:28.496375 800461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1209 11:25:28.513515 800461 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1209 11:25:28.513591 800461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1209 11:25:28.527734 800461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1209 11:25:28.544122 800461 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1209 11:25:28.558312 800461 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1209 11:25:28.573775 800461 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1209 11:25:28.585514 800461 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1209 11:25:28.606645 800461 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1209 11:25:28.618251 800461 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1209 11:25:28.632859 800461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 11:25:28.772990 800461 ssh_runner.go:195] Run: sudo systemctl restart containerd
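The sed edits above rewrite /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected on the host before the restart. A Go sketch of the SystemdCgroup rewrite, equivalent to the sed at 11:25:28.513591 (run as root):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Match the setting at any indentation and force it to false,
	// preserving the leading whitespace via the ${1} capture.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}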
I1209 11:25:29.047953 800461 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1209 11:25:29.048026 800461 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1209 11:25:29.067089 800461 start.go:563] Will wait 60s for crictl version
I1209 11:25:29.067162 800461 ssh_runner.go:195] Run: which crictl
I1209 11:25:29.077827 800461 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1209 11:25:29.131109 800461 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1209 11:25:29.131189 800461 ssh_runner.go:195] Run: containerd --version
I1209 11:25:29.155889 800461 ssh_runner.go:195] Run: containerd --version
I1209 11:25:29.185455 800461 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
I1209 11:25:29.187502 800461 cli_runner.go:164] Run: docker network inspect old-k8s-version-623695 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 11:25:29.202801 800461 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1209 11:25:29.207195 800461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
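The grep/echo pipeline above updates /etc/hosts atomically: drop any stale host.minikube.internal line, append the network gateway IP, and sudo cp the temp file into place. A Go sketch of the same idempotent update (illustrative, run as root):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Keep every line except an existing host.minikube.internal mapping.
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated /etc/hosts")
}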
I1209 11:25:29.219268 800461 kubeadm.go:883] updating cluster {Name:old-k8s-version-623695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1209 11:25:29.219400 800461 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1209 11:25:29.219469 800461 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 11:25:29.272172 800461 containerd.go:627] all images are preloaded for containerd runtime.
I1209 11:25:29.272197 800461 containerd.go:534] Images already preloaded, skipping extraction
I1209 11:25:29.272258 800461 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 11:25:29.323035 800461 containerd.go:627] all images are preloaded for containerd runtime.
I1209 11:25:29.323114 800461 cache_images.go:84] Images are preloaded, skipping loading
I1209 11:25:29.323159 800461 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
I1209 11:25:29.323323 800461 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-623695 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1209 11:25:29.323424 800461 ssh_runner.go:195] Run: sudo crictl info
I1209 11:25:29.376480 800461 cni.go:84] Creating CNI manager for ""
I1209 11:25:29.376508 800461 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1209 11:25:29.376519 800461 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1209 11:25:29.376540 800461 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-623695 NodeName:old-k8s-version-623695 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I1209 11:25:29.376688 800461 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "old-k8s-version-623695"
  kubeletExtraArgs:
    node-ip: 192.168.85.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
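minikube renders the kubeadm config above from a Go text/template, substituting per-profile values such as the node IP and name. A toy sketch of that rendering; the template body and field names here are illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values taken from this run's cluster config dump.
	_ = t.Execute(os.Stdout, map[string]string{
		"NodeIP":   "192.168.85.2",
		"NodeName": "old-k8s-version-623695",
	})
}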
I1209 11:25:29.376756 800461 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I1209 11:25:29.388893 800461 binaries.go:44] Found k8s binaries, skipping transfer
I1209 11:25:29.389071 800461 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1209 11:25:29.399788 800461 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I1209 11:25:29.421347 800461 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1209 11:25:29.443360 800461 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I1209 11:25:29.467336 800461 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1209 11:25:29.471094 800461 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1209 11:25:29.482451 800461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 11:25:29.636020 800461 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 11:25:29.650941 800461 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695 for IP: 192.168.85.2
I1209 11:25:29.650966 800461 certs.go:194] generating shared ca certs ...
I1209 11:25:29.650982 800461 certs.go:226] acquiring lock for ca certs: {Name:mkf9a6796a1bfe0d2ad344a1e9f65da735c51ff9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 11:25:29.651115 800461 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-586689/.minikube/ca.key
I1209 11:25:29.651171 800461 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.key
I1209 11:25:29.651184 800461 certs.go:256] generating profile certs ...
I1209 11:25:29.651275 800461 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/client.key
I1209 11:25:29.651353 800461 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/apiserver.key.3b2ad64b
I1209 11:25:29.651397 800461 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/proxy-client.key
I1209 11:25:29.651515 800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080.pem (1338 bytes)
W1209 11:25:29.651548 800461 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080_empty.pem, impossibly tiny 0 bytes
I1209 11:25:29.651561 800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem (1679 bytes)
I1209 11:25:29.651592 800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem (1078 bytes)
I1209 11:25:29.651632 800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem (1123 bytes)
I1209 11:25:29.651659 800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem (1679 bytes)
I1209 11:25:29.651710 800461 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem (1708 bytes)
I1209 11:25:29.652368 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1209 11:25:29.691501 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1209 11:25:29.719791 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1209 11:25:29.748398 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1209 11:25:29.775828 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I1209 11:25:29.816141 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1209 11:25:29.893230 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1209 11:25:29.929202 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/old-k8s-version-623695/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1209 11:25:29.970424 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1209 11:25:30.076009 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080.pem --> /usr/share/ca-certificates/592080.pem (1338 bytes)
I1209 11:25:30.138411 800461 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem --> /usr/share/ca-certificates/5920802.pem (1708 bytes)
I1209 11:25:30.182212 800461 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1209 11:25:30.217824 800461 ssh_runner.go:195] Run: openssl version
I1209 11:25:30.224366 800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5920802.pem && ln -fs /usr/share/ca-certificates/5920802.pem /etc/ssl/certs/5920802.pem"
I1209 11:25:30.235595 800461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5920802.pem
I1209 11:25:30.240312 800461 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 9 10:44 /usr/share/ca-certificates/5920802.pem
I1209 11:25:30.240402 800461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5920802.pem
I1209 11:25:30.250143 800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5920802.pem /etc/ssl/certs/3ec20f2e.0"
I1209 11:25:30.261400 800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1209 11:25:30.272969 800461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1209 11:25:30.277712 800461 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 9 10:37 /usr/share/ca-certificates/minikubeCA.pem
I1209 11:25:30.277783 800461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1209 11:25:30.285954 800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1209 11:25:30.296418 800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/592080.pem && ln -fs /usr/share/ca-certificates/592080.pem /etc/ssl/certs/592080.pem"
I1209 11:25:30.307580 800461 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/592080.pem
I1209 11:25:30.311898 800461 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 9 10:44 /usr/share/ca-certificates/592080.pem
I1209 11:25:30.311967 800461 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/592080.pem
I1209 11:25:30.319712 800461 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/592080.pem /etc/ssl/certs/51391683.0"
I1209 11:25:30.330403 800461 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1209 11:25:30.334808 800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1209 11:25:30.346865 800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1209 11:25:30.357888 800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1209 11:25:30.365959 800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1209 11:25:30.375557 800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1209 11:25:30.385627 800461 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
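Each `openssl x509 -checkend 86400` invocation above asks whether that control-plane cert expires within 24 hours; a cert failing the check would be regenerated before the restart. The same check in Go, a sketch pointed at one of the certs tested above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}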
I1209 11:25:30.393269 800461 kubeadm.go:392] StartCluster: {Name:old-k8s-version-623695 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-623695 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 11:25:30.393398 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1209 11:25:30.393461 800461 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1209 11:25:30.446836 800461 cri.go:89] found id: "484100ecf70c93234ce300e5b905734cece0723a625060c5e6f1e45f273ba13d"
I1209 11:25:30.446865 800461 cri.go:89] found id: "ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
I1209 11:25:30.446872 800461 cri.go:89] found id: "eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
I1209 11:25:30.446876 800461 cri.go:89] found id: "c8dee69e2c3486f5230d08c0860efbe796008eebd4c95c9749003caa1b5e8c95"
I1209 11:25:30.446879 800461 cri.go:89] found id: "a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
I1209 11:25:30.446885 800461 cri.go:89] found id: "25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
I1209 11:25:30.446888 800461 cri.go:89] found id: "0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
I1209 11:25:30.446891 800461 cri.go:89] found id: "8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
I1209 11:25:30.446894 800461 cri.go:89] found id: "2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
I1209 11:25:30.446900 800461 cri.go:89] found id: ""
I1209 11:25:30.446951 800461 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1209 11:25:30.460859 800461 cri.go:116] JSON = null
W1209 11:25:30.460909 800461 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 9
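The warning above comes from comparing two views of the runtime: crictl ps found 9 kube-system containers, but `sudo runc list -f json` printed null because no runc state survived the container restart, so there is nothing to unpause and minikube falls through to a cluster restart. A Go sketch of that null-tolerant JSON decode:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	raw := []byte("null") // what `sudo runc list -f json` returned above
	var containers []struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	if err := json.Unmarshal(raw, &containers); err != nil {
		panic(err)
	}
	// Unmarshalling JSON null into a slice leaves it nil: zero paused containers.
	fmt.Printf("paused containers listed: %d\n", len(containers))
}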
I1209 11:25:30.460973 800461 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1209 11:25:30.472235 800461 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I1209 11:25:30.472259 800461 kubeadm.go:593] restartPrimaryControlPlane start ...
I1209 11:25:30.472321 800461 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1209 11:25:30.488965 800461 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1209 11:25:30.489577 800461 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-623695" does not appear in /home/jenkins/minikube-integration/20068-586689/kubeconfig
I1209 11:25:30.489714 800461 kubeconfig.go:62] /home/jenkins/minikube-integration/20068-586689/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-623695" cluster setting kubeconfig missing "old-k8s-version-623695" context setting]
I1209 11:25:30.490113 800461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/kubeconfig: {Name:mk6f05f318819272b7562cf231de4edaf3cc73af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
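kubeconfig.go detects that both the cluster and the context entries for this profile are missing and rewrites the file under a write lock. A hedged client-go sketch of that repair step, with `repairKubeconfig` as a hypothetical helper (minikube's real code also wires up the profile's client certificates):

```go
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig (hypothetical) adds missing cluster/context entries so
// `name` appears in the kubeconfig, as the repair above does.
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server // https://192.168.85.2:8443 for this profile
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name // auth entry omitted in this sketch
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	_ = repairKubeconfig(
		"/home/jenkins/minikube-integration/20068-586689/kubeconfig",
		"old-k8s-version-623695", "https://192.168.85.2:8443")
}
```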
I1209 11:25:30.491536 800461 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1209 11:25:30.505640 800461 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
I1209 11:25:30.505680 800461 kubeadm.go:597] duration metric: took 33.414391ms to restartPrimaryControlPlane
I1209 11:25:30.505691 800461 kubeadm.go:394] duration metric: took 112.43652ms to StartCluster
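The `diff -u` run above is the reconfiguration probe: when the current and freshly rendered kubeadm.yaml are identical, the existing control plane is restarted rather than reconfigured. A sketch of that decision over a local shell (the helper name is made up; minikube runs the command through its ssh_runner):

```go
package main

import (
	"fmt"
	"os/exec"
)

// needsReconfig runs `diff -u old new`; diff exits 0 when the files match,
// 1 when they differ, and >1 on error.
func needsReconfig(oldPath, newPath string) (bool, error) {
	cmd := exec.Command("sudo", "diff", "-u", oldPath, newPath)
	if err := cmd.Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, nil // files differ: reconfigure the control plane
		}
		return false, err
	}
	return false, nil // identical: plain restart is enough
}

func main() {
	reconfig, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("needs reconfiguration:", reconfig, err)
}
```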
I1209 11:25:30.505707 800461 settings.go:142] acquiring lock: {Name:mk7f755871171984acf41c83b87c2df5d7451702 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 11:25:30.505766 800461 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20068-586689/kubeconfig
I1209 11:25:30.506466 800461 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/kubeconfig: {Name:mk6f05f318819272b7562cf231de4edaf3cc73af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 11:25:30.506699 800461 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1209 11:25:30.507074 800461 config.go:182] Loaded profile config "old-k8s-version-623695": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1209 11:25:30.507129 800461 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1209 11:25:30.507215 800461 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-623695"
I1209 11:25:30.507245 800461 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-623695"
W1209 11:25:30.507269 800461 addons.go:243] addon storage-provisioner should already be in state true
I1209 11:25:30.507278 800461 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-623695"
I1209 11:25:30.507305 800461 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-623695"
W1209 11:25:30.507312 800461 addons.go:243] addon metrics-server should already be in state true
I1209 11:25:30.507315 800461 host.go:66] Checking if "old-k8s-version-623695" exists ...
I1209 11:25:30.507336 800461 host.go:66] Checking if "old-k8s-version-623695" exists ...
I1209 11:25:30.507778 800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
I1209 11:25:30.507964 800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
I1209 11:25:30.512154 800461 out.go:177] * Verifying Kubernetes components...
I1209 11:25:30.507255 800461 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-623695"
I1209 11:25:30.512275 800461 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-623695"
I1209 11:25:30.512635 800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
I1209 11:25:30.507267 800461 addons.go:69] Setting dashboard=true in profile "old-k8s-version-623695"
I1209 11:25:30.513350 800461 addons.go:234] Setting addon dashboard=true in "old-k8s-version-623695"
W1209 11:25:30.513362 800461 addons.go:243] addon dashboard should already be in state true
I1209 11:25:30.513392 800461 host.go:66] Checking if "old-k8s-version-623695" exists ...
I1209 11:25:30.513842 800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
I1209 11:25:30.517204 800461 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 11:25:30.556124 800461 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1209 11:25:30.561603 800461 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1209 11:25:30.561630 800461 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1209 11:25:30.561712 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:30.618321 800461 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-623695"
W1209 11:25:30.618346 800461 addons.go:243] addon default-storageclass should already be in state true
I1209 11:25:30.618379 800461 host.go:66] Checking if "old-k8s-version-623695" exists ...
I1209 11:25:30.619190 800461 cli_runner.go:164] Run: docker container inspect old-k8s-version-623695 --format={{.State.Status}}
I1209 11:25:30.620978 800461 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1209 11:25:30.627469 800461 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1209 11:25:30.627596 800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
I1209 11:25:30.629505 800461 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1209 11:25:30.629528 800461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1209 11:25:30.629596 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:30.629759 800461 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1209 11:25:30.631840 800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1209 11:25:30.631867 800461 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1209 11:25:30.631933 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:30.674342 800461 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1209 11:25:30.674364 800461 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1209 11:25:30.674431 800461 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-623695
I1209 11:25:30.694711 800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
I1209 11:25:30.695778 800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
I1209 11:25:30.735198 800461 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33802 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/old-k8s-version-623695/id_rsa Username:docker}
I1209 11:25:30.739230 800461 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 11:25:30.757179 800461 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-623695" to be "Ready" ...
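node_ready.go polls the node object until its Ready condition turns True, treating errors such as the connection-refused responses below as "not ready yet". A minimal client-go sketch of that wait loop; the 2s poll interval is an assumption, while the kubeconfig path and node name come from this run:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the node's NodeReady condition is True or the
// timeout expires. Errors (apiserver still restarting) just mean "retry".
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q not Ready within %v", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/20068-586689/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "old-k8s-version-623695", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`node has status "Ready":"True"`)
}
```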
I1209 11:25:30.797796 800461 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1209 11:25:30.797818 800461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1209 11:25:30.819397 800461 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1209 11:25:30.819469 800461 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1209 11:25:30.850124 800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1209 11:25:30.850150 800461 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1209 11:25:30.856243 800461 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1209 11:25:30.856280 800461 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I1209 11:25:30.862829 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1209 11:25:30.882128 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1209 11:25:30.891481 800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1209 11:25:30.891507 800461 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1209 11:25:30.913370 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1209 11:25:30.978985 800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1209 11:25:30.979013 800461 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1209 11:25:31.111240 800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1209 11:25:31.111271 800461 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1209 11:25:31.229244 800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I1209 11:25:31.229286 800461 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
W1209 11:25:31.252380 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 11:25:31.252490 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.252505 800461 retry.go:31] will retry after 312.30868ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 11:25:31.252560 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.252573 800461 retry.go:31] will retry after 195.840828ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.252428 800461 retry.go:31] will retry after 325.758646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
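Each failed apply above is rescheduled with a small, growing, jittered delay (312ms, 195ms, 325ms here; whole seconds later on). A sketch of that pattern, assuming plain jittered exponential backoff rather than minikube's actual retry package:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfterBackoff retries apply with jittered, doubling delays. Jitter
// keeps the parallel appliers (storageclass, metrics-server, dashboard)
// from hitting the recovering apiserver in lockstep.
func retryAfterBackoff(attempts int, apply func() error) error {
	delay := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	err := retryAfterBackoff(3, func() error {
		return errors.New("connection to the server localhost:8443 was refused")
	})
	fmt.Println("gave up:", err)
}
```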
I1209 11:25:31.261241 800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1209 11:25:31.261324 800461 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1209 11:25:31.282207 800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1209 11:25:31.282248 800461 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I1209 11:25:31.302660 800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1209 11:25:31.302686 800461 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1209 11:25:31.324912 800461 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1209 11:25:31.324936 800461 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1209 11:25:31.346250 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1209 11:25:31.448674 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.448707 800461 retry.go:31] will retry after 274.574509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.448832 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1209 11:25:31.546150 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.546185 800461 retry.go:31] will retry after 194.00489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.565464 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1209 11:25:31.578858 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1209 11:25:31.708671 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.708705 800461 retry.go:31] will retry after 504.690937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.724004 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1209 11:25:31.740378 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1209 11:25:31.766130 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.766213 800461 retry.go:31] will retry after 238.761685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 11:25:31.928841 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.928951 800461 retry.go:31] will retry after 521.929992ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 11:25:31.934075 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:31.934168 800461 retry.go:31] will retry after 737.614843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:32.008427 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1209 11:25:32.097721 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:32.097753 800461 retry.go:31] will retry after 578.176208ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:32.214190 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1209 11:25:32.345770 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:32.345802 800461 retry.go:31] will retry after 592.33363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:32.451087 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1209 11:25:32.553635 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:32.553710 800461 retry.go:31] will retry after 291.196951ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:32.672013 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1209 11:25:32.676520 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1209 11:25:32.758282 800461 node_ready.go:53] error getting node "old-k8s-version-623695": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-623695": dial tcp 192.168.85.2:8443: connect: connection refused
W1209 11:25:32.829287 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:32.829372 800461 retry.go:31] will retry after 1.210941259s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 11:25:32.829444 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:32.829471 800461 retry.go:31] will retry after 982.212498ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:32.845802 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1209 11:25:32.939226 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1209 11:25:32.979760 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:32.979837 800461 retry.go:31] will retry after 783.958882ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 11:25:33.078133 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:33.078232 800461 retry.go:31] will retry after 601.622997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:33.680139 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1209 11:25:33.764177 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1209 11:25:33.800730 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:33.800820 800461 retry.go:31] will retry after 1.216118305s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:33.812059 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1209 11:25:33.932049 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:33.932138 800461 retry.go:31] will retry after 1.428178551s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 11:25:34.002518 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:34.002554 800461 retry.go:31] will retry after 681.832932ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:34.040831 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1209 11:25:34.155257 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:34.155340 800461 retry.go:31] will retry after 1.821310198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:34.685515 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1209 11:25:34.764780 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:34.764814 800461 retry.go:31] will retry after 2.480440108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:35.017257 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1209 11:25:35.124131 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:35.124165 800461 retry.go:31] will retry after 1.824937625s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:35.258763 800461 node_ready.go:53] error getting node "old-k8s-version-623695": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-623695": dial tcp 192.168.85.2:8443: connect: connection refused
I1209 11:25:35.361228 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1209 11:25:35.449129 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:35.449192 800461 retry.go:31] will retry after 1.536707217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:35.977231 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1209 11:25:36.073029 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:36.073066 800461 retry.go:31] will retry after 1.482200004s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:36.950140 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1209 11:25:36.986524 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1209 11:25:37.040354 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:37.040422 800461 retry.go:31] will retry after 2.836538474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1209 11:25:37.095998 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:37.096049 800461 retry.go:31] will retry after 3.467911443s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:37.245468 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1209 11:25:37.356729 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:37.356763 800461 retry.go:31] will retry after 3.882487674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:37.556187 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1209 11:25:37.666888 800461 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:37.666924 800461 retry.go:31] will retry after 2.230923411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1209 11:25:37.758561 800461 node_ready.go:53] error getting node "old-k8s-version-623695": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-623695": dial tcp 192.168.85.2:8443: connect: connection refused
I1209 11:25:39.878156 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1209 11:25:39.898514 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1209 11:25:40.564985 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1209 11:25:41.240009 800461 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1209 11:25:49.609899 800461 node_ready.go:49] node "old-k8s-version-623695" has status "Ready":"True"
I1209 11:25:49.609925 800461 node_ready.go:38] duration metric: took 18.852653029s for node "old-k8s-version-623695" to be "Ready" ...
I1209 11:25:49.609935 800461 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
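The per-pod checks that follow reduce to one condition: a pod counts as Ready when its PodReady condition reports True. A small sketch of that predicate (illustrative; pod_ready.go also handles label selection and timeouts):

```go
// Package podready sketches the predicate behind the
// `has status "Ready":"True"` / `"Ready":"False"` lines below.
package podready

import corev1 "k8s.io/api/core/v1"

// Ready reports whether pod's PodReady condition is True.
func Ready(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```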
I1209 11:25:49.846557 800461 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-pll5n" in "kube-system" namespace to be "Ready" ...
I1209 11:25:49.943548 800461 pod_ready.go:93] pod "coredns-74ff55c5b-pll5n" in "kube-system" namespace has status "Ready":"True"
I1209 11:25:49.943626 800461 pod_ready.go:82] duration metric: took 96.984304ms for pod "coredns-74ff55c5b-pll5n" in "kube-system" namespace to be "Ready" ...
I1209 11:25:49.943686 800461 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
I1209 11:25:50.002789 800461 pod_ready.go:93] pod "etcd-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"True"
I1209 11:25:50.002872 800461 pod_ready.go:82] duration metric: took 59.164551ms for pod "etcd-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
I1209 11:25:50.002904 800461 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
I1209 11:25:50.070711 800461 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"True"
I1209 11:25:50.070788 800461 pod_ready.go:82] duration metric: took 67.863665ms for pod "kube-apiserver-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
I1209 11:25:50.070816 800461 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
I1209 11:25:50.588295 800461 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"True"
I1209 11:25:50.588372 800461 pod_ready.go:82] duration metric: took 517.535912ms for pod "kube-controller-manager-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
I1209 11:25:50.588400 800461 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nftmg" in "kube-system" namespace to be "Ready" ...
I1209 11:25:50.603440 800461 pod_ready.go:93] pod "kube-proxy-nftmg" in "kube-system" namespace has status "Ready":"True"
I1209 11:25:50.603514 800461 pod_ready.go:82] duration metric: took 15.08574ms for pod "kube-proxy-nftmg" in "kube-system" namespace to be "Ready" ...
I1209 11:25:50.603542 800461 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
I1209 11:25:50.877395 800461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.999192521s)
I1209 11:25:51.026372 800461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.127774085s)
I1209 11:25:51.026468 800461 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-623695"
I1209 11:25:51.202494 800461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.637440809s)
I1209 11:25:51.202834 800461 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.962791314s)
I1209 11:25:51.205812 800461 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-623695 addons enable metrics-server
I1209 11:25:51.208913 800461 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
I1209 11:25:51.212051 800461 addons.go:510] duration metric: took 20.704913758s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
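The pod_ready lines in this run poll each pod's Ready condition until it reports "True" or the per-pod 6m0s budget runs out, then log a duration metric. A minimal client-go sketch of that style of check, assuming a configured *kubernetes.Clientset; waitPodReady is a hypothetical helper for illustration, not minikube's actual pod_ready.go code:

package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the named pod until its Ready condition is "True" or
// the timeout elapses, mirroring the waits and duration metrics logged here.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient API errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}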
I1209 11:25:52.625412 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:25:55.110880 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:25:57.610316 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:25:59.611257 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:02.111059 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:04.111134 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:06.610287 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:08.616901 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:11.114544 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:13.611631 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:16.110572 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:18.110934 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:20.111935 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:22.610845 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:25.110954 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:27.610279 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:29.611193 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:31.611303 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:34.110581 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:36.111338 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:38.120073 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:40.610297 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:42.614550 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:45.114891 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:47.610163 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:50.112722 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:52.615239 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:55.111837 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:57.610113 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:26:59.610695 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:02.111921 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:04.611135 800461 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:06.611100 800461 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace has status "Ready":"True"
I1209 11:27:06.611126 800461 pod_ready.go:82] duration metric: took 1m16.007564198s for pod "kube-scheduler-old-k8s-version-623695" in "kube-system" namespace to be "Ready" ...
I1209 11:27:06.611139 800461 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace to be "Ready" ...
I1209 11:27:08.617505 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:10.617834 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:12.619601 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:15.144934 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:17.617034 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:19.617599 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:21.618035 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:23.619227 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:25.627097 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:28.118915 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:30.142388 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:32.619085 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:35.118726 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:37.122103 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:39.619385 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:42.119266 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:44.617266 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:46.618426 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:49.117806 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:51.118538 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:53.617617 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:55.617834 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:57.617899 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:27:59.619974 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:02.117761 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:04.119300 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:06.618390 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:09.118381 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:11.622952 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:14.117687 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:16.121672 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:18.618426 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:21.117958 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:23.118578 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:25.130617 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:27.618538 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:29.618910 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:32.118785 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:34.618500 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:37.118559 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:39.119055 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:41.618505 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:44.117630 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:46.118112 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:48.118655 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:50.617627 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:53.117901 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:55.118792 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:28:57.619062 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:00.183641 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:02.618152 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:05.118980 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:07.119245 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:09.618809 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:12.118459 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:14.119337 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:16.617934 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:19.118593 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:21.617424 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:23.617712 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:25.617986 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:27.618915 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:29.619981 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:32.118472 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:34.119167 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:36.617888 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:39.118642 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:41.202232 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:43.618179 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:46.118172 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:48.118656 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:50.618674 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:53.116882 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:55.117748 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:57.618147 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:29:59.619373 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:01.619430 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:04.117668 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:06.119115 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:08.617867 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:11.118821 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:13.617499 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:15.619669 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:18.117622 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:20.119694 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:22.618811 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:25.118847 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:27.617398 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:29.619912 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:31.685885 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:34.117668 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:36.118114 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:38.119922 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:40.618478 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:43.117903 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:45.130750 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:47.618316 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:49.625165 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:52.117848 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:54.118545 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:56.118884 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:30:58.617815 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:31:01.118929 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:31:03.617942 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:31:05.618817 800461 pod_ready.go:103] pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace has status "Ready":"False"
I1209 11:31:06.619335 800461 pod_ready.go:82] duration metric: took 4m0.008180545s for pod "metrics-server-9975d5f86-9pw69" in "kube-system" namespace to be "Ready" ...
E1209 11:31:06.619362 800461 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1209 11:31:06.619374 800461 pod_ready.go:39] duration metric: took 5m17.00942816s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
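The 4m0s metrics-server wait above ends in "context deadline exceeded": the pod never turns Ready because its image pull from fake.domain keeps failing, as the kubelet entries gathered further down show. A rough Go sketch of the deadline pattern behind that error; podIsReady is a hypothetical stand-in for the real API lookup, not minikube's implementation:

package sketch

import (
	"context"
	"fmt"
	"time"
)

// waitWithDeadline polls until podIsReady returns true or the context
// deadline fires; the deadline path is the one this run took.
func waitWithDeadline(podIsReady func() bool, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			// ctx.Err() is context.DeadlineExceeded on timeout.
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-tick.C:
			if podIsReady() {
				return nil
			}
		}
	}
}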
I1209 11:31:06.619391 800461 api_server.go:52] waiting for apiserver process to appear ...
I1209 11:31:06.619429 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1209 11:31:06.619498 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1209 11:31:06.665588 800461 cri.go:89] found id: "92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
I1209 11:31:06.665615 800461 cri.go:89] found id: "8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
I1209 11:31:06.665621 800461 cri.go:89] found id: ""
I1209 11:31:06.665629 800461 logs.go:282] 2 containers: [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265]
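Each CRI inventory step in this phase shells out to crictl, which prints one matching container ID per line; the trailing newline is why an empty found id: "" appears before the non-empty IDs are counted. A small os/exec sketch of the same listing, with the crictl command line taken verbatim from the log and the surrounding Go purely illustrative:

package sketch

import (
	"log"
	"os/exec"
	"strings"
)

// listContainers runs `sudo crictl ps -a --quiet --name=<name>` and returns
// the non-empty container IDs, dropping the blank entry left by the
// trailing newline.
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		log.Printf("crictl ps failed: %v", err)
		return nil
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids
}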
I1209 11:31:06.665689 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.669660 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.674395 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1209 11:31:06.674471 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1209 11:31:06.717779 800461 cri.go:89] found id: "c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
I1209 11:31:06.717802 800461 cri.go:89] found id: "2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
I1209 11:31:06.717807 800461 cri.go:89] found id: ""
I1209 11:31:06.717815 800461 logs.go:282] 2 containers: [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f]
I1209 11:31:06.717876 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.721820 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.725891 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1209 11:31:06.725964 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1209 11:31:06.779560 800461 cri.go:89] found id: "af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
I1209 11:31:06.779586 800461 cri.go:89] found id: "ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
I1209 11:31:06.779592 800461 cri.go:89] found id: ""
I1209 11:31:06.779600 800461 logs.go:282] 2 containers: [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478]
I1209 11:31:06.779663 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.783828 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.787756 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1209 11:31:06.787834 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1209 11:31:06.846135 800461 cri.go:89] found id: "ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
I1209 11:31:06.846161 800461 cri.go:89] found id: "0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
I1209 11:31:06.846166 800461 cri.go:89] found id: ""
I1209 11:31:06.846174 800461 logs.go:282] 2 containers: [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47]
I1209 11:31:06.846237 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.850232 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.854393 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1209 11:31:06.854467 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1209 11:31:06.901758 800461 cri.go:89] found id: "167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
I1209 11:31:06.901826 800461 cri.go:89] found id: "a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
I1209 11:31:06.901845 800461 cri.go:89] found id: ""
I1209 11:31:06.901870 800461 logs.go:282] 2 containers: [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e]
I1209 11:31:06.901962 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.906130 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.909943 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1209 11:31:06.910032 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1209 11:31:06.952413 800461 cri.go:89] found id: "8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
I1209 11:31:06.952495 800461 cri.go:89] found id: "25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
I1209 11:31:06.952515 800461 cri.go:89] found id: ""
I1209 11:31:06.952538 800461 logs.go:282] 2 containers: [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b]
I1209 11:31:06.952627 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.956769 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:06.960982 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1209 11:31:06.961110 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1209 11:31:07.003555 800461 cri.go:89] found id: "91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
I1209 11:31:07.003582 800461 cri.go:89] found id: "eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
I1209 11:31:07.003587 800461 cri.go:89] found id: ""
I1209 11:31:07.003595 800461 logs.go:282] 2 containers: [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44]
I1209 11:31:07.003768 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:07.010306 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:07.014583 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1209 11:31:07.014718 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1209 11:31:07.060225 800461 cri.go:89] found id: "b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
I1209 11:31:07.060256 800461 cri.go:89] found id: ""
I1209 11:31:07.060264 800461 logs.go:282] 1 containers: [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd]
I1209 11:31:07.060332 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:07.064188 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1209 11:31:07.064258 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1209 11:31:07.115430 800461 cri.go:89] found id: "663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
I1209 11:31:07.115452 800461 cri.go:89] found id: "1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
I1209 11:31:07.115457 800461 cri.go:89] found id: ""
I1209 11:31:07.115465 800461 logs.go:282] 2 containers: [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf]
I1209 11:31:07.115529 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:07.119609 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:07.123680 800461 logs.go:123] Gathering logs for coredns [ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478] ...
I1209 11:31:07.123708 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
I1209 11:31:07.174110 800461 logs.go:123] Gathering logs for kube-controller-manager [25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b] ...
I1209 11:31:07.174139 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
I1209 11:31:07.253927 800461 logs.go:123] Gathering logs for storage-provisioner [1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf] ...
I1209 11:31:07.253963 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
I1209 11:31:07.297941 800461 logs.go:123] Gathering logs for containerd ...
I1209 11:31:07.297970 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1209 11:31:07.363015 800461 logs.go:123] Gathering logs for container status ...
I1209 11:31:07.363062 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1209 11:31:07.415908 800461 logs.go:123] Gathering logs for describe nodes ...
I1209 11:31:07.415938 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1209 11:31:07.562065 800461 logs.go:123] Gathering logs for kube-apiserver [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2] ...
I1209 11:31:07.562098 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
I1209 11:31:07.628523 800461 logs.go:123] Gathering logs for coredns [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468] ...
I1209 11:31:07.628564 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
I1209 11:31:07.676860 800461 logs.go:123] Gathering logs for kube-scheduler [0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47] ...
I1209 11:31:07.676891 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
I1209 11:31:07.723056 800461 logs.go:123] Gathering logs for kindnet [eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44] ...
I1209 11:31:07.723091 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
I1209 11:31:07.780619 800461 logs.go:123] Gathering logs for kubernetes-dashboard [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd] ...
I1209 11:31:07.780653 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
I1209 11:31:07.823020 800461 logs.go:123] Gathering logs for kubelet ...
I1209 11:31:07.823047 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1209 11:31:07.878389 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528000 663 reflector.go:138] object-"kube-system"/"coredns-token-b78rj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-b78rj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:07.878649 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528077 663 reflector.go:138] object-"kube-system"/"kindnet-token-nl827": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nl827" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:07.878882 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532699 663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-sw5w9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-sw5w9" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:07.879083 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532801 663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:07.879293 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532864 663 reflector.go:138] object-"default"/"default-token-pgtqr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pgtqr" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:07.879510 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532917 663 reflector.go:138] object-"kube-system"/"kube-proxy-token-tnwqj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-tnwqj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:07.879733 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532965 663 reflector.go:138] object-"kube-system"/"metrics-server-token-hcpl8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-hcpl8" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:07.879941 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.533017 663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:07.890312 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.720038 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:07.890526 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.747865 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.893612 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:03 old-k8s-version-623695 kubelet[663]: E1209 11:26:03.558736 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:07.895693 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:14 old-k8s-version-623695 kubelet[663]: E1209 11:26:14.883179 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.895879 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.549936 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.896212 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.890882 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.896874 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:19 old-k8s-version-623695 kubelet[663]: E1209 11:26:19.928539 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.897362 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:21 old-k8s-version-623695 kubelet[663]: E1209 11:26:21.930997 663 pod_workers.go:191] Error syncing pod a4b9e510-c334-4949-a8ad-1f3f41854e03 ("storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"
W1209 11:31:07.899800 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:28 old-k8s-version-623695 kubelet[663]: E1209 11:26:28.556809 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:07.900858 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:33 old-k8s-version-623695 kubelet[663]: E1209 11:26:33.968067 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.901194 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:39 old-k8s-version-623695 kubelet[663]: E1209 11:26:39.927886 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.901381 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:42 old-k8s-version-623695 kubelet[663]: E1209 11:26:42.551170 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.901727 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:52 old-k8s-version-623695 kubelet[663]: E1209 11:26:52.546429 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.901913 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:53 old-k8s-version-623695 kubelet[663]: E1209 11:26:53.546792 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.902096 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:04 old-k8s-version-623695 kubelet[663]: E1209 11:27:04.547503 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.902691 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:06 old-k8s-version-623695 kubelet[663]: E1209 11:27:06.138491 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.903020 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:09 old-k8s-version-623695 kubelet[663]: E1209 11:27:09.927608 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.905540 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:15 old-k8s-version-623695 kubelet[663]: E1209 11:27:15.558173 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:07.905873 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:22 old-k8s-version-623695 kubelet[663]: E1209 11:27:22.550284 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.906057 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:29 old-k8s-version-623695 kubelet[663]: E1209 11:27:29.559056 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.906385 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:35 old-k8s-version-623695 kubelet[663]: E1209 11:27:35.546799 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.906569 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:43 old-k8s-version-623695 kubelet[663]: E1209 11:27:43.546652 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.907158 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:47 old-k8s-version-623695 kubelet[663]: E1209 11:27:47.281093 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.907490 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:49 old-k8s-version-623695 kubelet[663]: E1209 11:27:49.927704 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.907675 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:55 old-k8s-version-623695 kubelet[663]: E1209 11:27:55.546651 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.908004 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:03 old-k8s-version-623695 kubelet[663]: E1209 11:28:03.546208 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.908219 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:10 old-k8s-version-623695 kubelet[663]: E1209 11:28:10.546626 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.908550 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:15 old-k8s-version-623695 kubelet[663]: E1209 11:28:15.546835 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.908738 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:23 old-k8s-version-623695 kubelet[663]: E1209 11:28:23.546586 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.909067 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:26 old-k8s-version-623695 kubelet[663]: E1209 11:28:26.546890 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.911535 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:38 old-k8s-version-623695 kubelet[663]: E1209 11:28:38.563060 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:07.911870 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:40 old-k8s-version-623695 kubelet[663]: E1209 11:28:40.546787 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.912055 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:50 old-k8s-version-623695 kubelet[663]: E1209 11:28:50.546870 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.912386 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:55 old-k8s-version-623695 kubelet[663]: E1209 11:28:55.546234 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.912570 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:03 old-k8s-version-623695 kubelet[663]: E1209 11:29:03.546823 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.913179 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:10 old-k8s-version-623695 kubelet[663]: E1209 11:29:10.508888 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.913366 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:17 old-k8s-version-623695 kubelet[663]: E1209 11:29:17.546618 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.913697 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:19 old-k8s-version-623695 kubelet[663]: E1209 11:29:19.928082 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.913886 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:28 old-k8s-version-623695 kubelet[663]: E1209 11:29:28.547234 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.914214 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:31 old-k8s-version-623695 kubelet[663]: E1209 11:29:31.546227 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.914401 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:41 old-k8s-version-623695 kubelet[663]: E1209 11:29:41.546721 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.914730 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:43 old-k8s-version-623695 kubelet[663]: E1209 11:29:43.546444 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.915057 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.547387 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.915242 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.548186 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.915570 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:09 old-k8s-version-623695 kubelet[663]: E1209 11:30:09.546239 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.915756 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:11 old-k8s-version-623695 kubelet[663]: E1209 11:30:11.546522 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.916089 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:20 old-k8s-version-623695 kubelet[663]: E1209 11:30:20.547174 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.916274 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:26 old-k8s-version-623695 kubelet[663]: E1209 11:30:26.547093 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.916601 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:33 old-k8s-version-623695 kubelet[663]: E1209 11:30:33.546231 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.916786 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:39 old-k8s-version-623695 kubelet[663]: E1209 11:30:39.546660 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.917118 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: E1209 11:30:44.548327 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.917310 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:50 old-k8s-version-623695 kubelet[663]: E1209 11:30:50.546557 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.917646 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:07.917830 800461 logs.go:138] Found kubelet problem: Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:07.918159 800461 logs.go:138] Found kubelet problem: Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
I1209 11:31:07.918170 800461 logs.go:123] Gathering logs for dmesg ...
I1209 11:31:07.918185 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1209 11:31:07.942237 800461 logs.go:123] Gathering logs for kube-apiserver [8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265] ...
I1209 11:31:07.942269 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
I1209 11:31:08.010303 800461 logs.go:123] Gathering logs for etcd [2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f] ...
I1209 11:31:08.010402 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
I1209 11:31:08.069628 800461 logs.go:123] Gathering logs for kube-scheduler [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3] ...
I1209 11:31:08.069669 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
I1209 11:31:08.117522 800461 logs.go:123] Gathering logs for kube-proxy [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98] ...
I1209 11:31:08.117559 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
I1209 11:31:08.160325 800461 logs.go:123] Gathering logs for storage-provisioner [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465] ...
I1209 11:31:08.160413 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
I1209 11:31:08.241337 800461 logs.go:123] Gathering logs for etcd [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5] ...
I1209 11:31:08.241368 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
I1209 11:31:08.299507 800461 logs.go:123] Gathering logs for kube-proxy [a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e] ...
I1209 11:31:08.299537 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
I1209 11:31:08.349466 800461 logs.go:123] Gathering logs for kube-controller-manager [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa] ...
I1209 11:31:08.349574 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
I1209 11:31:08.415926 800461 logs.go:123] Gathering logs for kindnet [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3] ...
I1209 11:31:08.415965 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
I1209 11:31:08.486089 800461 out.go:358] Setting ErrFile to fd 2...
I1209 11:31:08.486164 800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1209 11:31:08.486253 800461 out.go:270] X Problems detected in kubelet:
W1209 11:31:08.486291 800461 out.go:270] Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: E1209 11:30:44.548327 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:08.486335 800461 out.go:270] Dec 09 11:30:50 old-k8s-version-623695 kubelet[663]: E1209 11:30:50.546557 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:08.486367 800461 out.go:270] Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:08.486408 800461 out.go:270] Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:08.486473 800461 out.go:270] Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
I1209 11:31:08.486487 800461 out.go:358] Setting ErrFile to fd 2...
I1209 11:31:08.486493 800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 11:31:18.488559 800461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 11:31:18.503305 800461 api_server.go:72] duration metric: took 5m47.996568848s to wait for apiserver process to appear ...
I1209 11:31:18.503330 800461 api_server.go:88] waiting for apiserver healthz status ...
I1209 11:31:18.503367 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1209 11:31:18.503422 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1209 11:31:18.586637 800461 cri.go:89] found id: "92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
I1209 11:31:18.586658 800461 cri.go:89] found id: "8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
I1209 11:31:18.586663 800461 cri.go:89] found id: ""
I1209 11:31:18.586670 800461 logs.go:282] 2 containers: [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265]
I1209 11:31:18.586732 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.592662 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.597005 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1209 11:31:18.597082 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1209 11:31:18.650624 800461 cri.go:89] found id: "c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
I1209 11:31:18.650643 800461 cri.go:89] found id: "2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
I1209 11:31:18.650648 800461 cri.go:89] found id: ""
I1209 11:31:18.650655 800461 logs.go:282] 2 containers: [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f]
I1209 11:31:18.650714 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.655082 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.659058 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1209 11:31:18.659127 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1209 11:31:18.716242 800461 cri.go:89] found id: "af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
I1209 11:31:18.716262 800461 cri.go:89] found id: "ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
I1209 11:31:18.716267 800461 cri.go:89] found id: ""
I1209 11:31:18.716275 800461 logs.go:282] 2 containers: [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478]
I1209 11:31:18.716332 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.721120 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.725267 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1209 11:31:18.725399 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1209 11:31:18.784506 800461 cri.go:89] found id: "ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
I1209 11:31:18.784578 800461 cri.go:89] found id: "0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
I1209 11:31:18.784586 800461 cri.go:89] found id: ""
I1209 11:31:18.784593 800461 logs.go:282] 2 containers: [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47]
I1209 11:31:18.784683 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.789471 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.793630 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1209 11:31:18.793751 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1209 11:31:18.875516 800461 cri.go:89] found id: "167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
I1209 11:31:18.875610 800461 cri.go:89] found id: "a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
I1209 11:31:18.875642 800461 cri.go:89] found id: ""
I1209 11:31:18.875671 800461 logs.go:282] 2 containers: [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e]
I1209 11:31:18.875795 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.882901 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.891490 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1209 11:31:18.891681 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1209 11:31:19.133994 800461 cri.go:89] found id: "8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
I1209 11:31:19.134060 800461 cri.go:89] found id: "25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
I1209 11:31:19.134086 800461 cri.go:89] found id: ""
I1209 11:31:19.134106 800461 logs.go:282] 2 containers: [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b]
I1209 11:31:19.134198 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.139026 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.143699 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1209 11:31:19.143825 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1209 11:31:19.193447 800461 cri.go:89] found id: "91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
I1209 11:31:19.193537 800461 cri.go:89] found id: "eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
I1209 11:31:19.193560 800461 cri.go:89] found id: ""
I1209 11:31:19.193579 800461 logs.go:282] 2 containers: [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44]
I1209 11:31:19.193678 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.198061 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.202246 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1209 11:31:19.202370 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1209 11:31:19.262303 800461 cri.go:89] found id: "b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
I1209 11:31:19.262375 800461 cri.go:89] found id: ""
I1209 11:31:19.262400 800461 logs.go:282] 1 containers: [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd]
I1209 11:31:19.262483 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.266675 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1209 11:31:19.266798 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1209 11:31:19.321269 800461 cri.go:89] found id: "663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
I1209 11:31:19.321342 800461 cri.go:89] found id: "1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
I1209 11:31:19.321361 800461 cri.go:89] found id: ""
I1209 11:31:19.321380 800461 logs.go:282] 2 containers: [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf]
I1209 11:31:19.321461 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.326755 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.331149 800461 logs.go:123] Gathering logs for kube-scheduler [0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47] ...
I1209 11:31:19.331228 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
I1209 11:31:19.386882 800461 logs.go:123] Gathering logs for kindnet [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3] ...
I1209 11:31:19.386967 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
I1209 11:31:19.452242 800461 logs.go:123] Gathering logs for containerd ...
I1209 11:31:19.452325 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1209 11:31:19.525862 800461 logs.go:123] Gathering logs for dmesg ...
I1209 11:31:19.525954 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1209 11:31:19.545007 800461 logs.go:123] Gathering logs for describe nodes ...
I1209 11:31:19.545090 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1209 11:31:19.724757 800461 logs.go:123] Gathering logs for kube-apiserver [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2] ...
I1209 11:31:19.724789 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
I1209 11:31:19.826367 800461 logs.go:123] Gathering logs for etcd [2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f] ...
I1209 11:31:19.826408 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
I1209 11:31:19.908141 800461 logs.go:123] Gathering logs for coredns [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468] ...
I1209 11:31:19.908227 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
I1209 11:31:19.988919 800461 logs.go:123] Gathering logs for coredns [ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478] ...
I1209 11:31:19.988947 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
I1209 11:31:20.059667 800461 logs.go:123] Gathering logs for kube-proxy [a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e] ...
I1209 11:31:20.059707 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
I1209 11:31:20.122817 800461 logs.go:123] Gathering logs for kindnet [eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44] ...
I1209 11:31:20.122865 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
I1209 11:31:20.176945 800461 logs.go:123] Gathering logs for kube-apiserver [8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265] ...
I1209 11:31:20.176975 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
I1209 11:31:20.248500 800461 logs.go:123] Gathering logs for kube-scheduler [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3] ...
I1209 11:31:20.248577 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
I1209 11:31:20.296022 800461 logs.go:123] Gathering logs for kube-controller-manager [25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b] ...
I1209 11:31:20.296050 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
I1209 11:31:20.388485 800461 logs.go:123] Gathering logs for kubernetes-dashboard [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd] ...
I1209 11:31:20.388571 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
I1209 11:31:20.438386 800461 logs.go:123] Gathering logs for storage-provisioner [1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf] ...
I1209 11:31:20.438415 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
I1209 11:31:20.479553 800461 logs.go:123] Gathering logs for container status ...
I1209 11:31:20.479584 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1209 11:31:20.547014 800461 logs.go:123] Gathering logs for kubelet ...
I1209 11:31:20.547045 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1209 11:31:20.602779 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528000 663 reflector.go:138] object-"kube-system"/"coredns-token-b78rj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-b78rj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.603031 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528077 663 reflector.go:138] object-"kube-system"/"kindnet-token-nl827": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nl827" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.603261 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532699 663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-sw5w9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-sw5w9" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.603459 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532801 663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.603665 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532864 663 reflector.go:138] object-"default"/"default-token-pgtqr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pgtqr" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.603875 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532917 663 reflector.go:138] object-"kube-system"/"kube-proxy-token-tnwqj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-tnwqj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.604167 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532965 663 reflector.go:138] object-"kube-system"/"metrics-server-token-hcpl8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-hcpl8" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.604377 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.533017 663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.614712 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.720038 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:20.614911 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.747865 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.617926 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:03 old-k8s-version-623695 kubelet[663]: E1209 11:26:03.558736 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:20.620029 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:14 old-k8s-version-623695 kubelet[663]: E1209 11:26:14.883179 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.620222 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.549936 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.620552 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.890882 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.621216 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:19 old-k8s-version-623695 kubelet[663]: E1209 11:26:19.928539 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.621656 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:21 old-k8s-version-623695 kubelet[663]: E1209 11:26:21.930997 663 pod_workers.go:191] Error syncing pod a4b9e510-c334-4949-a8ad-1f3f41854e03 ("storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"
W1209 11:31:20.624090 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:28 old-k8s-version-623695 kubelet[663]: E1209 11:26:28.556809 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:20.625177 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:33 old-k8s-version-623695 kubelet[663]: E1209 11:26:33.968067 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.625506 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:39 old-k8s-version-623695 kubelet[663]: E1209 11:26:39.927886 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.625700 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:42 old-k8s-version-623695 kubelet[663]: E1209 11:26:42.551170 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.626029 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:52 old-k8s-version-623695 kubelet[663]: E1209 11:26:52.546429 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.626212 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:53 old-k8s-version-623695 kubelet[663]: E1209 11:26:53.546792 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.626396 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:04 old-k8s-version-623695 kubelet[663]: E1209 11:27:04.547503 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.626985 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:06 old-k8s-version-623695 kubelet[663]: E1209 11:27:06.138491 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.627309 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:09 old-k8s-version-623695 kubelet[663]: E1209 11:27:09.927608 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.629742 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:15 old-k8s-version-623695 kubelet[663]: E1209 11:27:15.558173 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:20.630068 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:22 old-k8s-version-623695 kubelet[663]: E1209 11:27:22.550284 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.630252 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:29 old-k8s-version-623695 kubelet[663]: E1209 11:27:29.559056 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.630574 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:35 old-k8s-version-623695 kubelet[663]: E1209 11:27:35.546799 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.630756 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:43 old-k8s-version-623695 kubelet[663]: E1209 11:27:43.546652 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.631337 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:47 old-k8s-version-623695 kubelet[663]: E1209 11:27:47.281093 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.631667 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:49 old-k8s-version-623695 kubelet[663]: E1209 11:27:49.927704 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.631851 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:55 old-k8s-version-623695 kubelet[663]: E1209 11:27:55.546651 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.632178 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:03 old-k8s-version-623695 kubelet[663]: E1209 11:28:03.546208 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.632359 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:10 old-k8s-version-623695 kubelet[663]: E1209 11:28:10.546626 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.632692 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:15 old-k8s-version-623695 kubelet[663]: E1209 11:28:15.546835 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.632876 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:23 old-k8s-version-623695 kubelet[663]: E1209 11:28:23.546586 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.633206 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:26 old-k8s-version-623695 kubelet[663]: E1209 11:28:26.546890 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.635619 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:38 old-k8s-version-623695 kubelet[663]: E1209 11:28:38.563060 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:20.635943 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:40 old-k8s-version-623695 kubelet[663]: E1209 11:28:40.546787 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.636130 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:50 old-k8s-version-623695 kubelet[663]: E1209 11:28:50.546870 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.636454 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:55 old-k8s-version-623695 kubelet[663]: E1209 11:28:55.546234 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.636636 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:03 old-k8s-version-623695 kubelet[663]: E1209 11:29:03.546823 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.637224 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:10 old-k8s-version-623695 kubelet[663]: E1209 11:29:10.508888 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.637408 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:17 old-k8s-version-623695 kubelet[663]: E1209 11:29:17.546618 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.637739 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:19 old-k8s-version-623695 kubelet[663]: E1209 11:29:19.928082 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.637921 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:28 old-k8s-version-623695 kubelet[663]: E1209 11:29:28.547234 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.638245 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:31 old-k8s-version-623695 kubelet[663]: E1209 11:29:31.546227 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.638429 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:41 old-k8s-version-623695 kubelet[663]: E1209 11:29:41.546721 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.638756 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:43 old-k8s-version-623695 kubelet[663]: E1209 11:29:43.546444 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.639078 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.547387 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.639264 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.548186 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.639608 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:09 old-k8s-version-623695 kubelet[663]: E1209 11:30:09.546239 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.639791 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:11 old-k8s-version-623695 kubelet[663]: E1209 11:30:11.546522 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.640114 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:20 old-k8s-version-623695 kubelet[663]: E1209 11:30:20.547174 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.640296 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:26 old-k8s-version-623695 kubelet[663]: E1209 11:30:26.547093 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.640619 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:33 old-k8s-version-623695 kubelet[663]: E1209 11:30:33.546231 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.640802 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:39 old-k8s-version-623695 kubelet[663]: E1209 11:30:39.546660 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.641125 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: E1209 11:30:44.548327 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.641326 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:50 old-k8s-version-623695 kubelet[663]: E1209 11:30:50.546557 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.641653 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.641835 800461 logs.go:138] Found kubelet problem: Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.642158 800461 logs.go:138] Found kubelet problem: Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.642482 800461 logs.go:138] Found kubelet problem: Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.546764 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.644887 800461 logs.go:138] Found kubelet problem: Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588176 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
I1209 11:31:20.644899 800461 logs.go:123] Gathering logs for etcd [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5] ...
I1209 11:31:20.644916 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
I1209 11:31:20.702944 800461 logs.go:123] Gathering logs for kube-proxy [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98] ...
I1209 11:31:20.702973 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
I1209 11:31:20.752172 800461 logs.go:123] Gathering logs for kube-controller-manager [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa] ...
I1209 11:31:20.752205 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
I1209 11:31:20.826676 800461 logs.go:123] Gathering logs for storage-provisioner [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465] ...
I1209 11:31:20.826715 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
I1209 11:31:20.876353 800461 out.go:358] Setting ErrFile to fd 2...
I1209 11:31:20.876379 800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1209 11:31:20.876431 800461 out.go:270] X Problems detected in kubelet:
W1209 11:31:20.876458 800461 out.go:270] Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.876476 800461 out.go:270] Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.876492 800461 out.go:270] Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.876499 800461 out.go:270] Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.546764 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.876505 800461 out.go:270] Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588176 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
I1209 11:31:20.876532 800461 out.go:358] Setting ErrFile to fd 2...
I1209 11:31:20.876539 800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 11:31:30.876747 800461 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1209 11:31:30.888585 800461 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I1209 11:31:30.891227 800461 out.go:201]
W1209 11:31:30.893313 800461 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1209 11:31:30.893353 800461 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1209 11:31:30.893369 800461 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1209 11:31:30.893375 800461 out.go:270] *
W1209 11:31:30.894594 800461 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 11:31:30.897267 800461 out.go:201]
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-623695 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
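Note on the failure mode: the repeated ImagePullBackOff/ErrImagePull entries in the stderr capture above are expected rather than the root cause; the audit table later in this log shows metrics-server was deliberately enabled with --registries=MetricsServer=fake.domain, an unresolvable registry. The actual exit (status 102, K8S_UNHEALTHY_CONTROL_PLANE) is a timeout: /healthz returned 200, but the control plane never reported v1.20.0 within the 6m0s wait. A minimal sketch for reproducing the expected pull error by hand, assuming the old-k8s-version-623695 profile is still running (crictl is invoked as /usr/bin/crictl elsewhere in this log):

    minikube -p old-k8s-version-623695 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
    # expected to fail with: dial tcp: lookup fake.domain ... no such host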
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-623695
helpers_test.go:235: (dbg) docker inspect old-k8s-version-623695:
-- stdout --
[
{
"Id": "e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5",
"Created": "2024-12-09T11:22:11.587866445Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 800659,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-12-09T11:25:22.633046099Z",
"FinishedAt": "2024-12-09T11:25:21.459511628Z"
},
"Image": "sha256:51526bd7c0894c18bc1ef50650a0aaaea3bed24f70f72f77ac668ae72dfff137",
"ResolvConfPath": "/var/lib/docker/containers/e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5/hostname",
"HostsPath": "/var/lib/docker/containers/e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5/hosts",
"LogPath": "/var/lib/docker/containers/e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5/e35e17296ac8fd4561f5e7477248d51f221402d3e58f2cf3e27d2d85941d98c5-json.log",
"Name": "/old-k8s-version-623695",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-623695:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-623695",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/5bca33ee280dba065127f98b38db67b119e9597628188ae34adc7a04adbbf9c1-init/diff:/var/lib/docker/overlay2/3061263481abb42050cdf79a3c56b934922c719b93d67b858ded630617e658c8/diff",
"MergedDir": "/var/lib/docker/overlay2/5bca33ee280dba065127f98b38db67b119e9597628188ae34adc7a04adbbf9c1/merged",
"UpperDir": "/var/lib/docker/overlay2/5bca33ee280dba065127f98b38db67b119e9597628188ae34adc7a04adbbf9c1/diff",
"WorkDir": "/var/lib/docker/overlay2/5bca33ee280dba065127f98b38db67b119e9597628188ae34adc7a04adbbf9c1/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-623695",
"Source": "/var/lib/docker/volumes/old-k8s-version-623695/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-623695",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-623695",
"name.minikube.sigs.k8s.io": "old-k8s-version-623695",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "7286a6b444dce53d5080d1c9ed89ae73a5e1e30dec021a0d11def1fb422c9b19",
"SandboxKey": "/var/run/docker/netns/7286a6b444dc",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33802"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33803"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33806"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33804"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33805"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-623695": {
"IPAMConfig": {
"IPv4Address": "192.168.85.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:55:02",
"DriverOpts": null,
"NetworkID": "f91d291d02499a47c4dfd84c18f2598dde0c3e4a5e25fd0978ece0d31c6395da",
"EndpointID": "2d7ff9836cad68f7a978931db7c76df06d2519df4b92b6bfc0214b7e27909fa8",
"Gateway": "192.168.85.1",
"IPAddress": "192.168.85.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-623695",
"e35e17296ac8"
]
}
}
}
}
]
-- /stdout --
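The inspect output above confirms the container is healthy at the Docker level: state running, static IP 192.168.85.2 on the old-k8s-version-623695 network, and the apiserver port 8443/tcp published on 127.0.0.1:33805. A quick sketch for extracting that mapping from the JSON, assuming jq is installed:

    docker inspect old-k8s-version-623695 | jq '.[0].NetworkSettings.Ports["8443/tcp"]'
    # [ { "HostIp": "127.0.0.1", "HostPort": "33805" } ]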
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-623695 -n old-k8s-version-623695
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-623695 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-623695 logs -n 25: (2.729826447s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-528742 | cert-expiration-528742 | jenkins | v1.34.0 | 09 Dec 24 11:20 UTC | 09 Dec 24 11:21 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-377461 | force-systemd-env-377461 | jenkins | v1.34.0 | 09 Dec 24 11:21 UTC | 09 Dec 24 11:21 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-377461 | force-systemd-env-377461 | jenkins | v1.34.0 | 09 Dec 24 11:21 UTC | 09 Dec 24 11:21 UTC |
| start | -p cert-options-724611 | cert-options-724611 | jenkins | v1.34.0 | 09 Dec 24 11:21 UTC | 09 Dec 24 11:22 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-724611 ssh | cert-options-724611 | jenkins | v1.34.0 | 09 Dec 24 11:22 UTC | 09 Dec 24 11:22 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-724611 -- sudo | cert-options-724611 | jenkins | v1.34.0 | 09 Dec 24 11:22 UTC | 09 Dec 24 11:22 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-724611 | cert-options-724611 | jenkins | v1.34.0 | 09 Dec 24 11:22 UTC | 09 Dec 24 11:22 UTC |
| start | -p old-k8s-version-623695 | old-k8s-version-623695 | jenkins | v1.34.0 | 09 Dec 24 11:22 UTC | 09 Dec 24 11:24 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-528742 | cert-expiration-528742 | jenkins | v1.34.0 | 09 Dec 24 11:24 UTC | 09 Dec 24 11:24 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-528742 | cert-expiration-528742 | jenkins | v1.34.0 | 09 Dec 24 11:24 UTC | 09 Dec 24 11:24 UTC |
| start | -p no-preload-239649 | no-preload-239649 | jenkins | v1.34.0 | 09 Dec 24 11:24 UTC | 09 Dec 24 11:26 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| addons | enable metrics-server -p old-k8s-version-623695 | old-k8s-version-623695 | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC | 09 Dec 24 11:25 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-623695 | old-k8s-version-623695 | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC | 09 Dec 24 11:25 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-623695 | old-k8s-version-623695 | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC | 09 Dec 24 11:25 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-623695 | old-k8s-version-623695 | jenkins | v1.34.0 | 09 Dec 24 11:25 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-239649 | no-preload-239649 | jenkins | v1.34.0 | 09 Dec 24 11:26 UTC | 09 Dec 24 11:26 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-239649 | no-preload-239649 | jenkins | v1.34.0 | 09 Dec 24 11:26 UTC | 09 Dec 24 11:26 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-239649 | no-preload-239649 | jenkins | v1.34.0 | 09 Dec 24 11:26 UTC | 09 Dec 24 11:26 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-239649 | no-preload-239649 | jenkins | v1.34.0 | 09 Dec 24 11:26 UTC | 09 Dec 24 11:30 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| image | no-preload-239649 image list | no-preload-239649 | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-239649 | no-preload-239649 | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-239649 | no-preload-239649 | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-239649 | no-preload-239649 | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
| delete | -p no-preload-239649 | no-preload-239649 | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | 09 Dec 24 11:31 UTC |
| start | -p embed-certs-545509 | embed-certs-545509 | jenkins | v1.34.0 | 09 Dec 24 11:31 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/12/09 11:31:15
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.23.2 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1209 11:31:15.573428 811348 out.go:345] Setting OutFile to fd 1 ...
I1209 11:31:15.573598 811348 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 11:31:15.573609 811348 out.go:358] Setting ErrFile to fd 2...
I1209 11:31:15.573614 811348 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1209 11:31:15.573990 811348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20068-586689/.minikube/bin
I1209 11:31:15.574528 811348 out.go:352] Setting JSON to false
I1209 11:31:15.576393 811348 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":15223,"bootTime":1733728653,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I1209 11:31:15.576518 811348 start.go:139] virtualization:
I1209 11:31:15.579470 811348 out.go:177] * [embed-certs-545509] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1209 11:31:15.581883 811348 out.go:177] - MINIKUBE_LOCATION=20068
I1209 11:31:15.581966 811348 notify.go:220] Checking for updates...
I1209 11:31:15.583934 811348 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1209 11:31:15.586164 811348 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20068-586689/kubeconfig
I1209 11:31:15.588317 811348 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20068-586689/.minikube
I1209 11:31:15.590247 811348 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1209 11:31:15.592093 811348 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1209 11:31:15.594616 811348 config.go:182] Loaded profile config "old-k8s-version-623695": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1209 11:31:15.594778 811348 driver.go:394] Setting default libvirt URI to qemu:///system
I1209 11:31:15.621007 811348 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I1209 11:31:15.621133 811348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 11:31:15.683949 811348 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 11:31:15.673895775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1209 11:31:15.684077 811348 docker.go:318] overlay module found
I1209 11:31:15.686476 811348 out.go:177] * Using the docker driver based on user configuration
I1209 11:31:15.688687 811348 start.go:297] selected driver: docker
I1209 11:31:15.688721 811348 start.go:901] validating driver "docker" against <nil>
I1209 11:31:15.688736 811348 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1209 11:31:15.689637 811348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1209 11:31:15.748631 811348 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-09 11:31:15.734044719 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1209 11:31:15.748867 811348 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I1209 11:31:15.749124 811348 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1209 11:31:15.752097 811348 out.go:177] * Using Docker driver with root privileges
I1209 11:31:15.754683 811348 cni.go:84] Creating CNI manager for ""
I1209 11:31:15.754775 811348 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1209 11:31:15.754790 811348 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I1209 11:31:15.754921 811348 start.go:340] cluster config:
{Name:embed-certs-545509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-545509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 11:31:15.757449 811348 out.go:177] * Starting "embed-certs-545509" primary control-plane node in "embed-certs-545509" cluster
I1209 11:31:15.759458 811348 cache.go:121] Beginning downloading kic base image for docker with containerd
I1209 11:31:15.761848 811348 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
I1209 11:31:15.764070 811348 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1209 11:31:15.764112 811348 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
I1209 11:31:15.764144 811348 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
I1209 11:31:15.764154 811348 cache.go:56] Caching tarball of preloaded images
I1209 11:31:15.764238 811348 preload.go:172] Found /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1209 11:31:15.764248 811348 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
I1209 11:31:15.764353 811348 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/config.json ...
I1209 11:31:15.764371 811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/config.json: {Name:mkfdc7f72bbc29f4fa6ffde9e5c99fe240224f1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 11:31:15.785432 811348 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon, skipping pull
I1209 11:31:15.785458 811348 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in daemon, skipping load
I1209 11:31:15.785478 811348 cache.go:194] Successfully downloaded all kic artifacts
I1209 11:31:15.785503 811348 start.go:360] acquireMachinesLock for embed-certs-545509: {Name:mk66bd73395460001b6da093a04d1bc9ddd88855 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1209 11:31:15.786280 811348 start.go:364] duration metric: took 716.878µs to acquireMachinesLock for "embed-certs-545509"
I1209 11:31:15.786322 811348 start.go:93] Provisioning new machine with config: &{Name:embed-certs-545509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-545509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1209 11:31:15.786421 811348 start.go:125] createHost starting for "" (driver="docker")
I1209 11:31:15.789913 811348 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I1209 11:31:15.790235 811348 start.go:159] libmachine.API.Create for "embed-certs-545509" (driver="docker")
I1209 11:31:15.790274 811348 client.go:168] LocalClient.Create starting
I1209 11:31:15.790350 811348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem
I1209 11:31:15.790394 811348 main.go:141] libmachine: Decoding PEM data...
I1209 11:31:15.790407 811348 main.go:141] libmachine: Parsing certificate...
I1209 11:31:15.790462 811348 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem
I1209 11:31:15.790485 811348 main.go:141] libmachine: Decoding PEM data...
I1209 11:31:15.790498 811348 main.go:141] libmachine: Parsing certificate...
I1209 11:31:15.790880 811348 cli_runner.go:164] Run: docker network inspect embed-certs-545509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1209 11:31:15.807890 811348 cli_runner.go:211] docker network inspect embed-certs-545509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1209 11:31:15.807973 811348 network_create.go:284] running [docker network inspect embed-certs-545509] to gather additional debugging logs...
I1209 11:31:15.807994 811348 cli_runner.go:164] Run: docker network inspect embed-certs-545509
W1209 11:31:15.837264 811348 cli_runner.go:211] docker network inspect embed-certs-545509 returned with exit code 1
I1209 11:31:15.837294 811348 network_create.go:287] error running [docker network inspect embed-certs-545509]: docker network inspect embed-certs-545509: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-545509 not found
I1209 11:31:15.837309 811348 network_create.go:289] output of [docker network inspect embed-certs-545509]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-545509 not found
** /stderr **
I1209 11:31:15.837419 811348 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 11:31:15.854075 811348 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f46af3becfa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:b8:cb:97:e7} reservation:<nil>}
I1209 11:31:15.854807 811348 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2b0b14c10880 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:64:32:77:d9} reservation:<nil>}
I1209 11:31:15.855455 811348 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-dc4622f79210 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ff:ac:62:29} reservation:<nil>}
I1209 11:31:15.856120 811348 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a02c60}
I1209 11:31:15.856167 811348 network_create.go:124] attempt to create docker network embed-certs-545509 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
I1209 11:31:15.856233 811348 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-545509 embed-certs-545509
I1209 11:31:15.940127 811348 network_create.go:108] docker network embed-certs-545509 192.168.76.0/24 created
I1209 11:31:15.940158 811348 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-545509" container
I1209 11:31:15.940230 811348 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1209 11:31:15.955717 811348 cli_runner.go:164] Run: docker volume create embed-certs-545509 --label name.minikube.sigs.k8s.io=embed-certs-545509 --label created_by.minikube.sigs.k8s.io=true
I1209 11:31:15.971905 811348 oci.go:103] Successfully created a docker volume embed-certs-545509
I1209 11:31:15.971993 811348 cli_runner.go:164] Run: docker run --rm --name embed-certs-545509-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-545509 --entrypoint /usr/bin/test -v embed-certs-545509:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
I1209 11:31:16.652875 811348 oci.go:107] Successfully prepared a docker volume embed-certs-545509
I1209 11:31:16.652926 811348 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1209 11:31:16.652948 811348 kic.go:194] Starting extracting preloaded images to volume ...
I1209 11:31:16.653022 811348 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-545509:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
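From this point the log interleaves two concurrent minikube processes: PID 800461 (the failing old-k8s-version-623695 start) and PID 811348 (the embed-certs-545509 start launched by the next test). A sketch for separating the two streams, assuming the post-mortem log has been saved as logs.txt (hypothetical filename):

    grep ' 800461 ' logs.txt > old-k8s-version-623695.log
    grep ' 811348 ' logs.txt > embed-certs-545509.log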
I1209 11:31:18.488559 800461 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1209 11:31:18.503305 800461 api_server.go:72] duration metric: took 5m47.996568848s to wait for apiserver process to appear ...
I1209 11:31:18.503330 800461 api_server.go:88] waiting for apiserver healthz status ...
I1209 11:31:18.503367 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1209 11:31:18.503422 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1209 11:31:18.586637 800461 cri.go:89] found id: "92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
I1209 11:31:18.586658 800461 cri.go:89] found id: "8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
I1209 11:31:18.586663 800461 cri.go:89] found id: ""
I1209 11:31:18.586670 800461 logs.go:282] 2 containers: [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265]
I1209 11:31:18.586732 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.592662 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.597005 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1209 11:31:18.597082 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1209 11:31:18.650624 800461 cri.go:89] found id: "c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
I1209 11:31:18.650643 800461 cri.go:89] found id: "2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
I1209 11:31:18.650648 800461 cri.go:89] found id: ""
I1209 11:31:18.650655 800461 logs.go:282] 2 containers: [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f]
I1209 11:31:18.650714 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.655082 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.659058 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1209 11:31:18.659127 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1209 11:31:18.716242 800461 cri.go:89] found id: "af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
I1209 11:31:18.716262 800461 cri.go:89] found id: "ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
I1209 11:31:18.716267 800461 cri.go:89] found id: ""
I1209 11:31:18.716275 800461 logs.go:282] 2 containers: [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478]
I1209 11:31:18.716332 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.721120 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.725267 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1209 11:31:18.725399 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1209 11:31:18.784506 800461 cri.go:89] found id: "ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
I1209 11:31:18.784578 800461 cri.go:89] found id: "0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
I1209 11:31:18.784586 800461 cri.go:89] found id: ""
I1209 11:31:18.784593 800461 logs.go:282] 2 containers: [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47]
I1209 11:31:18.784683 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.789471 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.793630 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1209 11:31:18.793751 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1209 11:31:18.875516 800461 cri.go:89] found id: "167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
I1209 11:31:18.875610 800461 cri.go:89] found id: "a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
I1209 11:31:18.875642 800461 cri.go:89] found id: ""
I1209 11:31:18.875671 800461 logs.go:282] 2 containers: [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e]
I1209 11:31:18.875795 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.882901 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:18.891490 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1209 11:31:18.891681 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1209 11:31:19.133994 800461 cri.go:89] found id: "8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
I1209 11:31:19.134060 800461 cri.go:89] found id: "25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
I1209 11:31:19.134086 800461 cri.go:89] found id: ""
I1209 11:31:19.134106 800461 logs.go:282] 2 containers: [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b]
I1209 11:31:19.134198 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.139026 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.143699 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1209 11:31:19.143825 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1209 11:31:19.193447 800461 cri.go:89] found id: "91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
I1209 11:31:19.193537 800461 cri.go:89] found id: "eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
I1209 11:31:19.193560 800461 cri.go:89] found id: ""
I1209 11:31:19.193579 800461 logs.go:282] 2 containers: [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44]
I1209 11:31:19.193678 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.198061 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.202246 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1209 11:31:19.202370 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1209 11:31:19.262303 800461 cri.go:89] found id: "b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
I1209 11:31:19.262375 800461 cri.go:89] found id: ""
I1209 11:31:19.262400 800461 logs.go:282] 1 container: [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd]
I1209 11:31:19.262483 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.266675 800461 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1209 11:31:19.266798 800461 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1209 11:31:19.321269 800461 cri.go:89] found id: "663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
I1209 11:31:19.321342 800461 cri.go:89] found id: "1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
I1209 11:31:19.321361 800461 cri.go:89] found id: ""
I1209 11:31:19.321380 800461 logs.go:282] 2 containers: [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf]
I1209 11:31:19.321461 800461 ssh_runner.go:195] Run: which crictl
I1209 11:31:19.326755 800461 ssh_runner.go:195] Run: which crictl
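The stretch from 11:31:18.50 to 11:31:19.33 enumerates CRI containers for each control-plane component before any logs are read: one `sudo crictl ps -a --quiet --name=<component>` per name, with two IDs coming back for most components because the cluster has been restarted once. A local sketch of that enumeration, assuming crictl can reach the containerd socket; listContainerIDs is a hypothetical helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the loop in the log: one `crictl ps` call per
// component name, --quiet so only container IDs come back, one per line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		"storage-provisioner"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", c, len(ids), ids)
	}
}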
I1209 11:31:19.331149 800461 logs.go:123] Gathering logs for kube-scheduler [0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47] ...
I1209 11:31:19.331228 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47"
I1209 11:31:19.386882 800461 logs.go:123] Gathering logs for kindnet [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3] ...
I1209 11:31:19.386967 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3"
I1209 11:31:19.452242 800461 logs.go:123] Gathering logs for containerd ...
I1209 11:31:19.452325 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1209 11:31:19.525862 800461 logs.go:123] Gathering logs for dmesg ...
I1209 11:31:19.525954 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1209 11:31:19.545007 800461 logs.go:123] Gathering logs for describe nodes ...
I1209 11:31:19.545090 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1209 11:31:19.724757 800461 logs.go:123] Gathering logs for kube-apiserver [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2] ...
I1209 11:31:19.724789 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2"
I1209 11:31:19.826367 800461 logs.go:123] Gathering logs for etcd [2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f] ...
I1209 11:31:19.826408 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f"
I1209 11:31:19.908141 800461 logs.go:123] Gathering logs for coredns [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468] ...
I1209 11:31:19.908227 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468"
I1209 11:31:19.988919 800461 logs.go:123] Gathering logs for coredns [ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478] ...
I1209 11:31:19.988947 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478"
I1209 11:31:20.059667 800461 logs.go:123] Gathering logs for kube-proxy [a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e] ...
I1209 11:31:20.059707 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e"
I1209 11:31:20.122817 800461 logs.go:123] Gathering logs for kindnet [eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44] ...
I1209 11:31:20.122865 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44"
I1209 11:31:20.176945 800461 logs.go:123] Gathering logs for kube-apiserver [8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265] ...
I1209 11:31:20.176975 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265"
I1209 11:31:20.248500 800461 logs.go:123] Gathering logs for kube-scheduler [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3] ...
I1209 11:31:20.248577 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3"
I1209 11:31:20.296022 800461 logs.go:123] Gathering logs for kube-controller-manager [25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b] ...
I1209 11:31:20.296050 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b"
I1209 11:31:20.388485 800461 logs.go:123] Gathering logs for kubernetes-dashboard [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd] ...
I1209 11:31:20.388571 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd"
I1209 11:31:20.438386 800461 logs.go:123] Gathering logs for storage-provisioner [1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf] ...
I1209 11:31:20.438415 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf"
I1209 11:31:20.479553 800461 logs.go:123] Gathering logs for container status ...
I1209 11:31:20.479584 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1209 11:31:20.547014 800461 logs.go:123] Gathering logs for kubelet ...
I1209 11:31:20.547045 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1209 11:31:20.602779 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528000 663 reflector.go:138] object-"kube-system"/"coredns-token-b78rj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-b78rj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.603031 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.528077 663 reflector.go:138] object-"kube-system"/"kindnet-token-nl827": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nl827" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.603261 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532699 663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-sw5w9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-sw5w9" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.603459 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532801 663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.603665 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532864 663 reflector.go:138] object-"default"/"default-token-pgtqr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-pgtqr" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.603875 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532917 663 reflector.go:138] object-"kube-system"/"kube-proxy-token-tnwqj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-tnwqj" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.604167 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.532965 663 reflector.go:138] object-"kube-system"/"metrics-server-token-hcpl8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-hcpl8" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.604377 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:49 old-k8s-version-623695 kubelet[663]: E1209 11:25:49.533017 663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-623695" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-623695' and this object
W1209 11:31:20.614712 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.720038 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:20.614911 800461 logs.go:138] Found kubelet problem: Dec 09 11:25:51 old-k8s-version-623695 kubelet[663]: E1209 11:25:51.747865 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.617926 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:03 old-k8s-version-623695 kubelet[663]: E1209 11:26:03.558736 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:20.620029 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:14 old-k8s-version-623695 kubelet[663]: E1209 11:26:14.883179 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.620222 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.549936 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.620552 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:15 old-k8s-version-623695 kubelet[663]: E1209 11:26:15.890882 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.621216 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:19 old-k8s-version-623695 kubelet[663]: E1209 11:26:19.928539 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.621656 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:21 old-k8s-version-623695 kubelet[663]: E1209 11:26:21.930997 663 pod_workers.go:191] Error syncing pod a4b9e510-c334-4949-a8ad-1f3f41854e03 ("storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a4b9e510-c334-4949-a8ad-1f3f41854e03)"
W1209 11:31:20.624090 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:28 old-k8s-version-623695 kubelet[663]: E1209 11:26:28.556809 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:20.625177 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:33 old-k8s-version-623695 kubelet[663]: E1209 11:26:33.968067 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.625506 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:39 old-k8s-version-623695 kubelet[663]: E1209 11:26:39.927886 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.625700 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:42 old-k8s-version-623695 kubelet[663]: E1209 11:26:42.551170 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.626029 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:52 old-k8s-version-623695 kubelet[663]: E1209 11:26:52.546429 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.626212 800461 logs.go:138] Found kubelet problem: Dec 09 11:26:53 old-k8s-version-623695 kubelet[663]: E1209 11:26:53.546792 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.626396 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:04 old-k8s-version-623695 kubelet[663]: E1209 11:27:04.547503 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.626985 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:06 old-k8s-version-623695 kubelet[663]: E1209 11:27:06.138491 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.627309 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:09 old-k8s-version-623695 kubelet[663]: E1209 11:27:09.927608 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.629742 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:15 old-k8s-version-623695 kubelet[663]: E1209 11:27:15.558173 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:20.630068 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:22 old-k8s-version-623695 kubelet[663]: E1209 11:27:22.550284 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.630252 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:29 old-k8s-version-623695 kubelet[663]: E1209 11:27:29.559056 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.630574 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:35 old-k8s-version-623695 kubelet[663]: E1209 11:27:35.546799 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.630756 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:43 old-k8s-version-623695 kubelet[663]: E1209 11:27:43.546652 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.631337 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:47 old-k8s-version-623695 kubelet[663]: E1209 11:27:47.281093 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.631667 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:49 old-k8s-version-623695 kubelet[663]: E1209 11:27:49.927704 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.631851 800461 logs.go:138] Found kubelet problem: Dec 09 11:27:55 old-k8s-version-623695 kubelet[663]: E1209 11:27:55.546651 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.632178 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:03 old-k8s-version-623695 kubelet[663]: E1209 11:28:03.546208 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.632359 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:10 old-k8s-version-623695 kubelet[663]: E1209 11:28:10.546626 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.632692 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:15 old-k8s-version-623695 kubelet[663]: E1209 11:28:15.546835 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.632876 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:23 old-k8s-version-623695 kubelet[663]: E1209 11:28:23.546586 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.633206 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:26 old-k8s-version-623695 kubelet[663]: E1209 11:28:26.546890 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.635619 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:38 old-k8s-version-623695 kubelet[663]: E1209 11:28:38.563060 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
W1209 11:31:20.635943 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:40 old-k8s-version-623695 kubelet[663]: E1209 11:28:40.546787 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.636130 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:50 old-k8s-version-623695 kubelet[663]: E1209 11:28:50.546870 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.636454 800461 logs.go:138] Found kubelet problem: Dec 09 11:28:55 old-k8s-version-623695 kubelet[663]: E1209 11:28:55.546234 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.636636 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:03 old-k8s-version-623695 kubelet[663]: E1209 11:29:03.546823 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.637224 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:10 old-k8s-version-623695 kubelet[663]: E1209 11:29:10.508888 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.637408 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:17 old-k8s-version-623695 kubelet[663]: E1209 11:29:17.546618 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.637739 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:19 old-k8s-version-623695 kubelet[663]: E1209 11:29:19.928082 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.637921 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:28 old-k8s-version-623695 kubelet[663]: E1209 11:29:28.547234 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.638245 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:31 old-k8s-version-623695 kubelet[663]: E1209 11:29:31.546227 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.638429 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:41 old-k8s-version-623695 kubelet[663]: E1209 11:29:41.546721 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.638756 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:43 old-k8s-version-623695 kubelet[663]: E1209 11:29:43.546444 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.639078 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.547387 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.639264 800461 logs.go:138] Found kubelet problem: Dec 09 11:29:56 old-k8s-version-623695 kubelet[663]: E1209 11:29:56.548186 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.639608 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:09 old-k8s-version-623695 kubelet[663]: E1209 11:30:09.546239 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.639791 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:11 old-k8s-version-623695 kubelet[663]: E1209 11:30:11.546522 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.640114 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:20 old-k8s-version-623695 kubelet[663]: E1209 11:30:20.547174 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.640296 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:26 old-k8s-version-623695 kubelet[663]: E1209 11:30:26.547093 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.640619 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:33 old-k8s-version-623695 kubelet[663]: E1209 11:30:33.546231 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.640802 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:39 old-k8s-version-623695 kubelet[663]: E1209 11:30:39.546660 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.641125 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: E1209 11:30:44.548327 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.641326 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:50 old-k8s-version-623695 kubelet[663]: E1209 11:30:50.546557 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.641653 800461 logs.go:138] Found kubelet problem: Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.641835 800461 logs.go:138] Found kubelet problem: Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.642158 800461 logs.go:138] Found kubelet problem: Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.642482 800461 logs.go:138] Found kubelet problem: Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.546764 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.644887 800461 logs.go:138] Found kubelet problem: Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588176 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
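The W-level block above is minikube's kubelet log scanner at work: having pulled the last 400 journal lines, it replays every entry matching a known problem signature with a "Found kubelet problem" prefix. A rough sketch of that kind of scan over journalctl output on stdin; the single pattern below is illustrative, not minikube's actual signature list:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// problemRe is an illustrative pattern: it catches the pod sync failures and
// reflector list/watch permission errors seen in this log, nothing more.
var problemRe = regexp.MustCompile(`Error syncing pod|Failed to watch \*v1\.`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if problemRe.MatchString(sc.Text()) {
			fmt.Println("Found kubelet problem:", sc.Text())
		}
	}
}

Fed as `journalctl -u kubelet -n 400 | go run scan.go`, it would flag the same pod_workers and reflector lines listed above.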
I1209 11:31:20.644899 800461 logs.go:123] Gathering logs for etcd [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5] ...
I1209 11:31:20.644916 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5"
I1209 11:31:20.702944 800461 logs.go:123] Gathering logs for kube-proxy [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98] ...
I1209 11:31:20.702973 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98"
I1209 11:31:20.752172 800461 logs.go:123] Gathering logs for kube-controller-manager [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa] ...
I1209 11:31:20.752205 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa"
I1209 11:31:20.826676 800461 logs.go:123] Gathering logs for storage-provisioner [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465] ...
I1209 11:31:20.826715 800461 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465"
I1209 11:31:20.876353 800461 out.go:358] Setting ErrFile to fd 2...
I1209 11:31:20.876379 800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1209 11:31:20.876431 800461 out.go:270] X Problems detected in kubelet:
W1209 11:31:20.876458 800461 out.go:270] Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.876476 800461 out.go:270] Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1209 11:31:20.876492 800461 out.go:270] Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.876499 800461 out.go:270] Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.546764 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
W1209 11:31:20.876505 800461 out.go:270] Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588176 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
I1209 11:31:20.876532 800461 out.go:358] Setting ErrFile to fd 2...
I1209 11:31:20.876539 800461 out.go:392] TERM=,COLORTERM=, which probably does not support color
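The metrics-server entries dominating the problem summary are expected for this test: the profile deliberately points metrics-server at fake.domain, a hostname that cannot resolve, so every pull attempt dies in DNS and the pod cycles between ErrImagePull and ImagePullBackOff. The failure mode reproduces outside the cluster with nothing but the standard library:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Mirrors the containerd error in the log: the registry host simply
	// does not exist, so the image pull can never succeed.
	_, err := net.LookupHost("fake.domain")
	fmt.Println(err) // e.g. "lookup fake.domain: no such host"
}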
I1209 11:31:21.323236 811348 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20068-586689/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-545509:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.670168419s)
I1209 11:31:21.323267 811348 kic.go:203] duration metric: took 4.670315958s to extract preloaded images to volume ...
W1209 11:31:21.323430 811348 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1209 11:31:21.323543 811348 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1209 11:31:21.380950 811348 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-545509 --name embed-certs-545509 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-545509 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-545509 --network embed-certs-545509 --ip 192.168.76.2 --volume embed-certs-545509:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
I1209 11:31:21.728989 811348 cli_runner.go:164] Run: docker container inspect embed-certs-545509 --format={{.State.Running}}
I1209 11:31:21.750758 811348 cli_runner.go:164] Run: docker container inspect embed-certs-545509 --format={{.State.Status}}
I1209 11:31:21.789353 811348 cli_runner.go:164] Run: docker exec embed-certs-545509 stat /var/lib/dpkg/alternatives/iptables
I1209 11:31:21.843884 811348 oci.go:144] the created container "embed-certs-545509" has a running status.
I1209 11:31:21.843914 811348 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa...
I1209 11:31:22.070646 811348 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1209 11:31:22.101445 811348 cli_runner.go:164] Run: docker container inspect embed-certs-545509 --format={{.State.Status}}
I1209 11:31:22.135193 811348 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1209 11:31:22.135213 811348 kic_runner.go:114] Args: [docker exec --privileged embed-certs-545509 chown docker:docker /home/docker/.ssh/authorized_keys]
I1209 11:31:22.195833 811348 cli_runner.go:164] Run: docker container inspect embed-certs-545509 --format={{.State.Status}}
I1209 11:31:22.218411 811348 machine.go:93] provisionDockerMachine start ...
I1209 11:31:22.218518 811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
I1209 11:31:22.246110 811348 main.go:141] libmachine: Using SSH client type: native
I1209 11:31:22.246376 811348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 33812 <nil> <nil>}
I1209 11:31:22.246385 811348 main.go:141] libmachine: About to run SSH command:
hostname
I1209 11:31:22.246995 811348 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38070->127.0.0.1:33812: read: connection reset by peer
I1209 11:31:25.374425 811348 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-545509
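The dial at 11:31:22.24 is reset because sshd inside the just-started container is not yet accepting connections; libmachine simply retries until the hostname command succeeds about three seconds later. A minimal retry loop in the same spirit, assuming the ephemeral host port 33812 from the log; waitForSSH is a hypothetical helper:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls a TCP port until something accepts the connection,
// the way libmachine keeps retrying after the initial reset above.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable within %v", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:33812", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}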
I1209 11:31:25.374448 811348 ubuntu.go:169] provisioning hostname "embed-certs-545509"
I1209 11:31:25.374512 811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
I1209 11:31:25.393078 811348 main.go:141] libmachine: Using SSH client type: native
I1209 11:31:25.393458 811348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 33812 <nil> <nil>}
I1209 11:31:25.393482 811348 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-545509 && echo "embed-certs-545509" | sudo tee /etc/hostname
I1209 11:31:25.531738 811348 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-545509
I1209 11:31:25.531822 811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
I1209 11:31:25.550856 811348 main.go:141] libmachine: Using SSH client type: native
I1209 11:31:25.551119 811348 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415f50] 0x418790 <nil> [] 0s} 127.0.0.1 33812 <nil> <nil>}
I1209 11:31:25.551142 811348 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-545509' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-545509/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-545509' | sudo tee -a /etc/hosts;
fi
fi
I1209 11:31:25.677858 811348 main.go:141] libmachine: SSH cmd err, output: <nil>:
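The script that just ran keeps /etc/hosts consistent with the new hostname: leave the file alone if the name is already present, rewrite the 127.0.1.1 line if one exists, otherwise append a fresh mapping. The same idempotent edit expressed in Go, assuming only read access for the demonstration; ensureHostsEntry is a hypothetical helper:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry reproduces the shell logic from the log: no change if the
// hostname already appears at the end of a line, rewrite 127.0.1.1 if that
// entry exists, otherwise append a new mapping.
func ensureHostsEntry(contents, hostname string) string {
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(contents) {
		return contents
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(contents) {
		return loop.ReplaceAllString(contents, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(contents, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(ensureHostsEntry(string(data), "embed-certs-545509"))
}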
I1209 11:31:25.677885 811348 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20068-586689/.minikube CaCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20068-586689/.minikube}
I1209 11:31:25.677910 811348 ubuntu.go:177] setting up certificates
I1209 11:31:25.677919 811348 provision.go:84] configureAuth start
I1209 11:31:25.677986 811348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-545509
I1209 11:31:25.696258 811348 provision.go:143] copyHostCerts
I1209 11:31:25.696335 811348 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem, removing ...
I1209 11:31:25.696345 811348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem
I1209 11:31:25.696427 811348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/ca.pem (1078 bytes)
I1209 11:31:25.696524 811348 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem, removing ...
I1209 11:31:25.696530 811348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem
I1209 11:31:25.696561 811348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/cert.pem (1123 bytes)
I1209 11:31:25.696624 811348 exec_runner.go:144] found /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem, removing ...
I1209 11:31:25.696629 811348 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem
I1209 11:31:25.696652 811348 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20068-586689/.minikube/key.pem (1679 bytes)
I1209 11:31:25.696697 811348 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem org=jenkins.embed-certs-545509 san=[127.0.0.1 192.168.76.2 embed-certs-545509 localhost minikube]
I1209 11:31:26.422450 811348 provision.go:177] copyRemoteCerts
I1209 11:31:26.422525 811348 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1209 11:31:26.422574 811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
I1209 11:31:26.440421 811348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa Username:docker}
I1209 11:31:26.536007 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1209 11:31:26.571091 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I1209 11:31:26.599400 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1209 11:31:26.626277 811348 provision.go:87] duration metric: took 948.344347ms to configureAuth
I1209 11:31:26.626307 811348 ubuntu.go:193] setting minikube options for container-runtime
I1209 11:31:26.626488 811348 config.go:182] Loaded profile config "embed-certs-545509": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1209 11:31:26.626503 811348 machine.go:96] duration metric: took 4.408072193s to provisionDockerMachine
I1209 11:31:26.626510 811348 client.go:171] duration metric: took 10.836230736s to LocalClient.Create
I1209 11:31:26.626524 811348 start.go:167] duration metric: took 10.836291709s to libmachine.API.Create "embed-certs-545509"
I1209 11:31:26.626531 811348 start.go:293] postStartSetup for "embed-certs-545509" (driver="docker")
I1209 11:31:26.626540 811348 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1209 11:31:26.626592 811348 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1209 11:31:26.626640 811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
I1209 11:31:26.645080 811348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa Username:docker}
I1209 11:31:26.742716 811348 ssh_runner.go:195] Run: cat /etc/os-release
I1209 11:31:26.746489 811348 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1209 11:31:26.746531 811348 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1209 11:31:26.746543 811348 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1209 11:31:26.746550 811348 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1209 11:31:26.746564 811348 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-586689/.minikube/addons for local assets ...
I1209 11:31:26.746623 811348 filesync.go:126] Scanning /home/jenkins/minikube-integration/20068-586689/.minikube/files for local assets ...
I1209 11:31:26.746712 811348 filesync.go:149] local asset: /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem -> 5920802.pem in /etc/ssl/certs
I1209 11:31:26.746822 811348 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1209 11:31:26.757971 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem --> /etc/ssl/certs/5920802.pem (1708 bytes)
I1209 11:31:26.783851 811348 start.go:296] duration metric: took 157.305438ms for postStartSetup
I1209 11:31:26.784226 811348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-545509
I1209 11:31:26.800714 811348 profile.go:143] Saving config to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/config.json ...
I1209 11:31:26.800994 811348 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1209 11:31:26.801037 811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
I1209 11:31:26.817981 811348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa Username:docker}
I1209 11:31:26.906915 811348 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1209 11:31:26.911888 811348 start.go:128] duration metric: took 11.125437687s to createHost
I1209 11:31:26.911914 811348 start.go:83] releasing machines lock for "embed-certs-545509", held for 11.125613698s
I1209 11:31:26.911991 811348 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-545509
I1209 11:31:26.929466 811348 ssh_runner.go:195] Run: cat /version.json
I1209 11:31:26.929494 811348 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1209 11:31:26.929520 811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
I1209 11:31:26.929563 811348 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-545509
I1209 11:31:26.949671 811348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa Username:docker}
I1209 11:31:26.952461 811348 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33812 SSHKeyPath:/home/jenkins/minikube-integration/20068-586689/.minikube/machines/embed-certs-545509/id_rsa Username:docker}
I1209 11:31:27.037278 811348 ssh_runner.go:195] Run: systemctl --version
I1209 11:31:27.176180 811348 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1209 11:31:27.180801 811348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1209 11:31:27.211666 811348 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1209 11:31:27.211747 811348 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1209 11:31:27.249415 811348 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
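The two find commands above first patch the loopback CNI config, then sideline any bridge or podman configs by renaming them with a .mk_disabled suffix, so the CNI minikube installs (kindnet here) is the only one containerd will pick up. A rough Go equivalent of that rename pass (a sketch, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Disable competing bridge/podman configs, as the logged find/mv does.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join("/etc/cni/net.d", name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}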
I1209 11:31:27.249438 811348 start.go:495] detecting cgroup driver to use...
I1209 11:31:27.249471 811348 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1209 11:31:27.249519 811348 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1209 11:31:27.262805 811348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1209 11:31:27.275464 811348 docker.go:217] disabling cri-docker service (if available) ...
I1209 11:31:27.275533 811348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1209 11:31:27.291817 811348 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1209 11:31:27.309859 811348 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1209 11:31:27.408651 811348 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1209 11:31:27.513402 811348 docker.go:233] disabling docker service ...
I1209 11:31:27.513511 811348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1209 11:31:27.535928 811348 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1209 11:31:27.548633 811348 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1209 11:31:27.644733 811348 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1209 11:31:27.737886 811348 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1209 11:31:27.750512 811348 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1209 11:31:27.767670 811348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I1209 11:31:27.777999 811348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1209 11:31:27.789858 811348 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1209 11:31:27.789941 811348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1209 11:31:27.801637 811348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1209 11:31:27.813406 811348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1209 11:31:27.825088 811348 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1209 11:31:27.836077 811348 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1209 11:31:27.846723 811348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1209 11:31:27.857989 811348 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1209 11:31:27.868882 811348 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1209 11:31:27.880009 811348 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1209 11:31:27.889409 811348 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1209 11:31:27.898379 811348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 11:31:27.993526 811348 ssh_runner.go:195] Run: sudo systemctl restart containerd
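The sed commands above rewrite /etc/containerd/config.toml in place (sandbox image, runc v2 runtime, conf_dir, unprivileged ports, and SystemdCgroup = false for the cgroupfs driver) before containerd is restarted to pick the changes up. A sketch of one such edit done in Go rather than sed (illustrative; only the SystemdCgroup rewrite is shown):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Force the runc cgroup driver to cgroupfs (SystemdCgroup = false),
	// preserving the line's original indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}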
I1209 11:31:28.167600 811348 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1209 11:31:28.167718 811348 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1209 11:31:28.171976 811348 start.go:563] Will wait 60s for crictl version
I1209 11:31:28.172058 811348 ssh_runner.go:195] Run: which crictl
I1209 11:31:28.176037 811348 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1209 11:31:28.222921 811348 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1209 11:31:28.223015 811348 ssh_runner.go:195] Run: containerd --version
I1209 11:31:28.255129 811348 ssh_runner.go:195] Run: containerd --version
I1209 11:31:28.284188 811348 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.22 ...
I1209 11:31:28.286585 811348 cli_runner.go:164] Run: docker network inspect embed-certs-545509 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1209 11:31:28.307472 811348 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1209 11:31:28.312062 811348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1209 11:31:28.324563 811348 kubeadm.go:883] updating cluster {Name:embed-certs-545509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-545509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1209 11:31:28.324686 811348 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1209 11:31:28.324746 811348 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 11:31:28.364142 811348 containerd.go:627] all images are preloaded for containerd runtime.
I1209 11:31:28.364165 811348 containerd.go:534] Images already preloaded, skipping extraction
I1209 11:31:28.364232 811348 ssh_runner.go:195] Run: sudo crictl images --output json
I1209 11:31:28.405097 811348 containerd.go:627] all images are preloaded for containerd runtime.
I1209 11:31:28.405217 811348 cache_images.go:84] Images are preloaded, skipping loading
I1209 11:31:28.405241 811348 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.2 containerd true true} ...
I1209 11:31:28.405383 811348 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-545509 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.31.2 ClusterName:embed-certs-545509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1209 11:31:28.405494 811348 ssh_runner.go:195] Run: sudo crictl info
I1209 11:31:28.444085 811348 cni.go:84] Creating CNI manager for ""
I1209 11:31:28.444116 811348 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1209 11:31:28.444126 811348 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1209 11:31:28.444148 811348 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-545509 NodeName:embed-certs-545509 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1209 11:31:28.444266 811348 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-545509"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.76.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.31.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
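The dump above is a multi-document YAML: kubeadm's InitConfiguration and ClusterConfiguration plus a KubeletConfiguration and a KubeProxyConfiguration, all derived from the options struct logged just before it and shipped to /var/tmp/minikube/kubeadm.yaml.new below. A sketch of how such YAML can be rendered from a Go text/template (field names here are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.76.2",
		"APIServerPort":    8443,
		"CRISocket":        "/run/containerd/containerd.sock",
		"NodeName":         "embed-certs-545509",
	})
}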
I1209 11:31:28.444337 811348 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
I1209 11:31:28.456117 811348 binaries.go:44] Found k8s binaries, skipping transfer
I1209 11:31:28.456197 811348 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1209 11:31:28.467197 811348 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I1209 11:31:28.487075 811348 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1209 11:31:28.511070 811348 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I1209 11:31:28.531241 811348 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1209 11:31:28.535149 811348 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1209 11:31:28.547692 811348 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1209 11:31:28.641717 811348 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1209 11:31:28.657655 811348 certs.go:68] Setting up /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509 for IP: 192.168.76.2
I1209 11:31:28.657680 811348 certs.go:194] generating shared ca certs ...
I1209 11:31:28.657696 811348 certs.go:226] acquiring lock for ca certs: {Name:mkf9a6796a1bfe0d2ad344a1e9f65da735c51ff9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 11:31:28.657830 811348 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20068-586689/.minikube/ca.key
I1209 11:31:28.657877 811348 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.key
I1209 11:31:28.657888 811348 certs.go:256] generating profile certs ...
I1209 11:31:28.657941 811348 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.key
I1209 11:31:28.657956 811348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.crt with IP's: []
I1209 11:31:28.782708 811348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.crt ...
I1209 11:31:28.782741 811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.crt: {Name:mka364cd8d3839fdd6533d20e8d536d60e039f51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 11:31:28.782955 811348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.key ...
I1209 11:31:28.782972 811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/client.key: {Name:mkbc8b7899b3bc89be7acc1f8207e69a33dbda78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 11:31:28.784523 811348 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key.4c36df8d
I1209 11:31:28.784548 811348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt.4c36df8d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
I1209 11:31:29.419151 811348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt.4c36df8d ...
I1209 11:31:29.419183 811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt.4c36df8d: {Name:mk92be70b93f0a8661973b22ba7ac43456a22b8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 11:31:29.419815 811348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key.4c36df8d ...
I1209 11:31:29.419837 811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key.4c36df8d: {Name:mk9fd297115da815d1944e03426e9507db08a458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 11:31:29.420349 811348 certs.go:381] copying /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt.4c36df8d -> /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt
I1209 11:31:29.420444 811348 certs.go:385] copying /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key.4c36df8d -> /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key
I1209 11:31:29.420509 811348 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.key
I1209 11:31:29.420532 811348 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.crt with IP's: []
I1209 11:31:29.893361 811348 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.crt ...
I1209 11:31:29.893392 811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.crt: {Name:mk679fcdcb2745d458b20bf94d17dad4654aac98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1209 11:31:29.894046 811348 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.key ...
I1209 11:31:29.894067 811348 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.key: {Name:mk9997d254905b89b2d988644e4b4963149eede8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
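The profile certs above are ordinary CA-signed X.509 certificates; the apiserver cert carries the IP SANs logged earlier (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). A self-contained sketch of that issuance with Go's crypto/x509 (a stand-in CA replaces minikube's ca.key/ca.crt; error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA key/cert standing in for minikube's ca.key/ca.crt.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf serving certificate with the apiserver's IP SANs.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}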
I1209 11:31:29.894845 811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080.pem (1338 bytes)
W1209 11:31:29.894891 811348 certs.go:480] ignoring /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080_empty.pem, impossibly tiny 0 bytes
I1209 11:31:29.894904 811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca-key.pem (1679 bytes)
I1209 11:31:29.894932 811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/ca.pem (1078 bytes)
I1209 11:31:29.894959 811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/cert.pem (1123 bytes)
I1209 11:31:29.894988 811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/certs/key.pem (1679 bytes)
I1209 11:31:29.895034 811348 certs.go:484] found cert: /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem (1708 bytes)
I1209 11:31:29.895678 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1209 11:31:29.927038 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1209 11:31:29.954215 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1209 11:31:29.982964 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I1209 11:31:30.055108 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I1209 11:31:30.093099 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1209 11:31:30.164474 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1209 11:31:30.202325 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/profiles/embed-certs-545509/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I1209 11:31:30.238147 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/files/etc/ssl/certs/5920802.pem --> /usr/share/ca-certificates/5920802.pem (1708 bytes)
I1209 11:31:30.268999 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1209 11:31:30.297111 811348 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20068-586689/.minikube/certs/592080.pem --> /usr/share/ca-certificates/592080.pem (1338 bytes)
I1209 11:31:30.323640 811348 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1209 11:31:30.343677 811348 ssh_runner.go:195] Run: openssl version
I1209 11:31:30.349726 811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5920802.pem && ln -fs /usr/share/ca-certificates/5920802.pem /etc/ssl/certs/5920802.pem"
I1209 11:31:30.360112 811348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5920802.pem
I1209 11:31:30.365072 811348 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 9 10:44 /usr/share/ca-certificates/5920802.pem
I1209 11:31:30.365176 811348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5920802.pem
I1209 11:31:30.373018 811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5920802.pem /etc/ssl/certs/3ec20f2e.0"
I1209 11:31:30.383167 811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1209 11:31:30.393293 811348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1209 11:31:30.397307 811348 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 9 10:37 /usr/share/ca-certificates/minikubeCA.pem
I1209 11:31:30.397432 811348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1209 11:31:30.404502 811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1209 11:31:30.415291 811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/592080.pem && ln -fs /usr/share/ca-certificates/592080.pem /etc/ssl/certs/592080.pem"
I1209 11:31:30.425269 811348 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/592080.pem
I1209 11:31:30.429247 811348 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 9 10:44 /usr/share/ca-certificates/592080.pem
I1209 11:31:30.429315 811348 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/592080.pem
I1209 11:31:30.437056 811348 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/592080.pem /etc/ssl/certs/51391683.0"
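The openssl x509 -hash calls above compute the subject hash that names the /etc/ssl/certs symlinks (b5213941.0, 3ec20f2e.0, 51391683.0), which is how the system trust store finds CA certs. A small Go wrapper around the same command (illustrative; it shells out exactly as the log does):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash of a PEM certificate.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h + ".0") // symlink name under /etc/ssl/certs
}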
I1209 11:31:30.447450 811348 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1209 11:31:30.451138 811348 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1209 11:31:30.451204 811348 kubeadm.go:392] StartCluster: {Name:embed-certs-545509 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-545509 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1209 11:31:30.451296 811348 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1209 11:31:30.451362 811348 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1209 11:31:30.491466 811348 cri.go:89] found id: ""
I1209 11:31:30.491541 811348 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1209 11:31:30.501210 811348 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1209 11:31:30.511029 811348 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1209 11:31:30.511128 811348 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1209 11:31:30.521307 811348 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1209 11:31:30.521329 811348 kubeadm.go:157] found existing configuration files:
I1209 11:31:30.521412 811348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1209 11:31:30.531287 811348 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1209 11:31:30.531423 811348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1209 11:31:30.541088 811348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1209 11:31:30.555788 811348 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1209 11:31:30.555879 811348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1209 11:31:30.566424 811348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1209 11:31:30.578106 811348 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1209 11:31:30.578169 811348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1209 11:31:30.587370 811348 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1209 11:31:30.597057 811348 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1209 11:31:30.597200 811348 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
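The sweep above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here all four are simply absent, since this is a first start). A compact Go sketch of the same check-and-remove loop (illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		// Missing or stale configs are both removed, mirroring rm -f.
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
		}
	}
}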
I1209 11:31:30.606883 811348 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1209 11:31:30.652403 811348 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
I1209 11:31:30.652753 811348 kubeadm.go:310] [preflight] Running pre-flight checks
I1209 11:31:30.681234 811348 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I1209 11:31:30.681403 811348 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1072-aws
I1209 11:31:30.681477 811348 kubeadm.go:310] OS: Linux
I1209 11:31:30.681598 811348 kubeadm.go:310] CGROUPS_CPU: enabled
I1209 11:31:30.681680 811348 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I1209 11:31:30.681743 811348 kubeadm.go:310] CGROUPS_CPUSET: enabled
I1209 11:31:30.681801 811348 kubeadm.go:310] CGROUPS_DEVICES: enabled
I1209 11:31:30.681857 811348 kubeadm.go:310] CGROUPS_FREEZER: enabled
I1209 11:31:30.681924 811348 kubeadm.go:310] CGROUPS_MEMORY: enabled
I1209 11:31:30.681976 811348 kubeadm.go:310] CGROUPS_PIDS: enabled
I1209 11:31:30.682035 811348 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I1209 11:31:30.682088 811348 kubeadm.go:310] CGROUPS_BLKIO: enabled
I1209 11:31:30.745365 811348 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I1209 11:31:30.745483 811348 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1209 11:31:30.745584 811348 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1209 11:31:30.752208 811348 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1209 11:31:30.876747 800461 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
I1209 11:31:30.888585 800461 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
ok
I1209 11:31:30.891227 800461 out.go:201]
W1209 11:31:30.893313 800461 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1209 11:31:30.893353 800461 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1209 11:31:30.893369 800461 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1209 11:31:30.893375 800461 out.go:270] *
W1209 11:31:30.894594 800461 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1209 11:31:30.897267 800461 out.go:201]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
c302b0a613922 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 3458921e081b3 dashboard-metrics-scraper-8d5bb5db8-96bls
663485e631397 ba04bb24b9575 4 minutes ago Running storage-provisioner 3 f043fdaf31a38 storage-provisioner
b5b12d4047e89 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 746b50167f421 kubernetes-dashboard-cd95d586-lgxbj
af856b017afeb db91994f4ee8f 5 minutes ago Running coredns 1 460be1a0f8119 coredns-74ff55c5b-pll5n
119643b0c4cdc 1611cd07b61d5 5 minutes ago Running busybox 1 e5a7568aeb256 busybox
91cb2ed43dfec 2be0bcf609c65 5 minutes ago Running kindnet-cni 1 230d93a98bc51 kindnet-82lzl
167b84f8f987c 25a5233254979 5 minutes ago Running kube-proxy 1 2bd4720d903eb kube-proxy-nftmg
1c864e51ed369 ba04bb24b9575 5 minutes ago Exited storage-provisioner 2 f043fdaf31a38 storage-provisioner
ae41a05fd4b11 e7605f88f17d6 5 minutes ago Running kube-scheduler 1 fd9b36cd3d8ba kube-scheduler-old-k8s-version-623695
c841cccf0a5bd 05b738aa1bc63 5 minutes ago Running etcd 1 7a22b2c66dc58 etcd-old-k8s-version-623695
8afdcb5ba074e 1df8a2b116bd1 5 minutes ago Running kube-controller-manager 1 9ae7792a3925c kube-controller-manager-old-k8s-version-623695
92b50938c8b97 2c08bbbc02d3a 5 minutes ago Running kube-apiserver 1 5b29aa148a7e9 kube-apiserver-old-k8s-version-623695
d9234929f4e6d 1611cd07b61d5 6 minutes ago Exited busybox 0 162f9d9b1ade1 busybox
ed42b2a1e21f5 db91994f4ee8f 7 minutes ago Exited coredns 0 4761058ca1e45 coredns-74ff55c5b-pll5n
eb174547e0773 2be0bcf609c65 8 minutes ago Exited kindnet-cni 0 97745075a569e kindnet-82lzl
a9ed84f8cfb61 25a5233254979 8 minutes ago Exited kube-proxy 0 89921cf3dfc37 kube-proxy-nftmg
25fc7bce15ad2 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 897deba6c7c2f kube-controller-manager-old-k8s-version-623695
0a9c5fc2481f8 e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 03db049c5ebd9 kube-scheduler-old-k8s-version-623695
8f5f2eca7e918 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 c85de1d1210e5 kube-apiserver-old-k8s-version-623695
2b8b97c8ef833 05b738aa1bc63 8 minutes ago Exited etcd 0 b8a4462606bd8 etcd-old-k8s-version-623695
==> containerd <==
Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.590347304Z" level=info msg="CreateContainer within sandbox \"3458921e081b390dcd48735929af3f8fdab4debf680f4d0f6aa078cf68e9316d\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541\""
Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.592478591Z" level=info msg="StartContainer for \"b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541\""
Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.678096054Z" level=info msg="StartContainer for \"b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541\" returns successfully"
Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.717679570Z" level=info msg="shim disconnected" id=b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541 namespace=k8s.io
Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.717740585Z" level=warning msg="cleaning up after shim disconnected" id=b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541 namespace=k8s.io
Dec 09 11:27:46 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:46.717750726Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 09 11:27:47 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:47.283266462Z" level=info msg="RemoveContainer for \"e30bb1351656155c92a907fc07957340c1070203fe09cbeb70c1c6d72613432f\""
Dec 09 11:27:47 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:27:47.291480493Z" level=info msg="RemoveContainer for \"e30bb1351656155c92a907fc07957340c1070203fe09cbeb70c1c6d72613432f\" returns successfully"
Dec 09 11:28:38 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:28:38.548822044Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 11:28:38 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:28:38.557684534Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
Dec 09 11:28:38 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:28:38.559740144Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Dec 09 11:28:38 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:28:38.559851629Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.548863527Z" level=info msg="CreateContainer within sandbox \"3458921e081b390dcd48735929af3f8fdab4debf680f4d0f6aa078cf68e9316d\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.566297896Z" level=info msg="CreateContainer within sandbox \"3458921e081b390dcd48735929af3f8fdab4debf680f4d0f6aa078cf68e9316d\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc\""
Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.567171314Z" level=info msg="StartContainer for \"c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc\""
Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.659611122Z" level=info msg="StartContainer for \"c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc\" returns successfully"
Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.686274729Z" level=info msg="shim disconnected" id=c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc namespace=k8s.io
Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.686336990Z" level=warning msg="cleaning up after shim disconnected" id=c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc namespace=k8s.io
Dec 09 11:29:09 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:09.686347124Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 09 11:29:10 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:10.510562960Z" level=info msg="RemoveContainer for \"b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541\""
Dec 09 11:29:10 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:29:10.516177974Z" level=info msg="RemoveContainer for \"b6ea58f85fa6c9fe28b9d5ab62a4632f105050afa91bff4a8cdbc9f1e49b1541\" returns successfully"
Dec 09 11:31:18 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:31:18.578641271Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 11:31:18 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:31:18.585964147Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
Dec 09 11:31:18 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:31:18.587709669Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Dec 09 11:31:18 old-k8s-version-623695 containerd[569]: time="2024-12-09T11:31:18.587810126Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
==> coredns [af856b017afebc2eff0a52c30b40b9ae7d18fbed9646a2cd62795c30eae0a468] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:34230 - 19146 "HINFO IN 5414614461809052501.8755579344058806465. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015021828s
==> coredns [ed42b2a1e21f5bc2262274dce47ab17b0351c70af13f20f07756dca1fc28f478] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:41342 - 38037 "HINFO IN 8371817319472522321.4202090601241178016. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011448122s
==> describe nodes <==
Name: old-k8s-version-623695
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-623695
kubernetes.io/os=linux
minikube.k8s.io/commit=60110addcdbf0fec7168b962521659e922988d6c
minikube.k8s.io/name=old-k8s-version-623695
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_12_09T11_22_51_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 09 Dec 2024 11:22:47 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-623695
AcquireTime: <unset>
RenewTime: Mon, 09 Dec 2024 11:31:32 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 09 Dec 2024 11:26:39 +0000 Mon, 09 Dec 2024 11:22:41 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 09 Dec 2024 11:26:39 +0000 Mon, 09 Dec 2024 11:22:41 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 09 Dec 2024 11:26:39 +0000 Mon, 09 Dec 2024 11:22:41 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 09 Dec 2024 11:26:39 +0000 Mon, 09 Dec 2024 11:23:07 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.85.2
Hostname: old-k8s-version-623695
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: f19115236a0b4b3092ac588db40ca2b7
System UUID: b96e3903-3a11-4691-9f38-ea41a76f2123
Boot ID: 5eb73f75-e518-45c7-ab7b-f59a572ccc61
Kernel Version: 5.15.0-1072-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.22
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m37s
kube-system coredns-74ff55c5b-pll5n 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 8m26s
kube-system etcd-old-k8s-version-623695 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m34s
kube-system kindnet-82lzl 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 8m26s
kube-system kube-apiserver-old-k8s-version-623695 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m34s
kube-system kube-controller-manager-old-k8s-version-623695 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m34s
kube-system kube-proxy-nftmg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m26s
kube-system kube-scheduler-old-k8s-version-623695 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m34s
kube-system metrics-server-9975d5f86-9pw69 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m24s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m24s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-96bls 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m26s
kubernetes-dashboard kubernetes-dashboard-cd95d586-lgxbj 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m26s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m54s (x5 over 8m54s) kubelet Node old-k8s-version-623695 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m54s (x5 over 8m54s) kubelet Node old-k8s-version-623695 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m54s (x4 over 8m54s) kubelet Node old-k8s-version-623695 status is now: NodeHasSufficientPID
Normal Starting 8m34s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m34s kubelet Node old-k8s-version-623695 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m34s kubelet Node old-k8s-version-623695 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m34s kubelet Node old-k8s-version-623695 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m34s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m26s kubelet Node old-k8s-version-623695 status is now: NodeReady
Normal Starting 8m23s kube-proxy Starting kube-proxy.
Normal Starting 5m55s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m55s (x8 over 5m55s) kubelet Node old-k8s-version-623695 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m55s (x8 over 5m55s) kubelet Node old-k8s-version-623695 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m55s (x7 over 5m55s) kubelet Node old-k8s-version-623695 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m55s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m41s kube-proxy Starting kube-proxy.
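The node-level picture above is healthy: Ready stays True, the node is untainted, and the two bursts of kubelet Starting/NodeHasSufficient* events (8m34s and 5m55s ago) line up with the first boot and the restarted container, while allocated requests (950m CPU, 420Mi memory) sit comfortably inside the 2-CPU/8022300Ki node. Assuming the kubeconfig context is named after the profile (minikube's default, also assumed in the commands further below), the same view can be re-queried with:

kubectl --context old-k8s-version-623695 describe node old-k8s-version-623695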
==> dmesg <==
==> etcd [2b8b97c8ef833f12b7f54ccb1f4f9e43ee5e96f62f8bb9ee22ef50e58e3ea81f] <==
raft2024/12/09 11:22:40 INFO: 9f0758e1c58a86ed became candidate at term 2
raft2024/12/09 11:22:40 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
raft2024/12/09 11:22:40 INFO: 9f0758e1c58a86ed became leader at term 2
raft2024/12/09 11:22:40 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
2024-12-09 11:22:40.747196 I | etcdserver: published {Name:old-k8s-version-623695 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
2024-12-09 11:22:40.747702 I | embed: ready to serve client requests
2024-12-09 11:22:40.747873 I | embed: ready to serve client requests
2024-12-09 11:22:40.747995 I | etcdserver: setting up the initial cluster version to 3.4
2024-12-09 11:22:40.748823 N | etcdserver/membership: set the initial cluster version to 3.4
2024-12-09 11:22:40.748933 I | etcdserver/api: enabled capabilities for version 3.4
2024-12-09 11:22:40.765189 I | embed: serving client requests on 127.0.0.1:2379
2024-12-09 11:22:40.765711 I | embed: serving client requests on 192.168.85.2:2379
2024-12-09 11:23:09.412670 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:23:09.609870 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:23:19.616096 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:23:29.609747 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:23:39.609903 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:23:49.609789 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:23:59.610005 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:24:09.609834 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:24:19.609865 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:24:29.609818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:24:39.609822 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:24:49.610132 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:24:59.610619 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [c841cccf0a5bdd1fe2fca556e5e36b880a686f41d9146ef7788f247bce61a9d5] <==
2024-12-09 11:27:29.016310 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:27:39.016446 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:27:49.016550 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:27:59.016445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:28:09.016378 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:28:19.016212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:28:29.016464 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:28:39.016490 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:28:49.016397 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:28:59.016311 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:29:09.016600 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:29:19.016554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:29:29.016299 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:29:39.016393 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:29:49.016423 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:29:59.016436 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:30:09.016615 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:30:19.016452 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:30:29.016559 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:30:39.016330 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:30:49.016233 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:30:59.016332 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:31:09.022211 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:31:19.022051 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-12-09 11:31:29.016753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
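Both etcd containers (2b8b97c8… from the first boot, c841cccf… after the restart) answer /health with 200 for their entire lifetimes, so the datastore is not what keeps this SecondStart from going green. On v1.20.0 the same can be sanity-checked without etcd client certificates through the deprecated-but-still-served componentstatuses API:

kubectl --context old-k8s-version-623695 get componentstatuses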
==> kernel <==
11:31:33 up 4:14, 0 users, load average: 1.99, 3.03, 3.14
Linux old-k8s-version-623695 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [91cb2ed43dfec514bb35f6e6633f1b7bc69390d1eb0c98d7fd830339fc56e9e3] <==
I1209 11:29:32.829955 1 main.go:301] handling current node
I1209 11:29:42.830765 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:29:42.830802 1 main.go:301] handling current node
I1209 11:29:52.823506 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:29:52.823539 1 main.go:301] handling current node
I1209 11:30:02.829272 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:30:02.829309 1 main.go:301] handling current node
I1209 11:30:12.830813 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:30:12.830848 1 main.go:301] handling current node
I1209 11:30:22.829221 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:30:22.829257 1 main.go:301] handling current node
I1209 11:30:32.830368 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:30:32.830407 1 main.go:301] handling current node
I1209 11:30:42.830556 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:30:42.830595 1 main.go:301] handling current node
I1209 11:30:52.823419 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:30:52.823460 1 main.go:301] handling current node
I1209 11:31:02.829931 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:31:02.829970 1 main.go:301] handling current node
I1209 11:31:12.829217 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:31:12.829334 1 main.go:301] handling current node
I1209 11:31:22.831170 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:31:22.831207 1 main.go:301] handling current node
I1209 11:31:32.831423 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:31:32.831461 1 main.go:301] handling current node
==> kindnet [eb174547e077389a60898a71aa5ab17ac0877a9aa4886acc78d7d9454ff6ec44] <==
I1209 11:23:11.805276 1 shared_informer.go:320] Caches are synced for kube-network-policies
I1209 11:23:11.805305 1 metrics.go:61] Registering metrics
I1209 11:23:11.805348 1 controller.go:401] Syncing nftables rules
I1209 11:23:21.511652 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:23:21.511716 1 main.go:301] handling current node
I1209 11:23:31.502757 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:23:31.502796 1 main.go:301] handling current node
I1209 11:23:41.502275 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:23:41.502313 1 main.go:301] handling current node
I1209 11:23:51.508727 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:23:51.508840 1 main.go:301] handling current node
I1209 11:24:01.510833 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:24:01.510872 1 main.go:301] handling current node
I1209 11:24:11.503190 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:24:11.503300 1 main.go:301] handling current node
I1209 11:24:21.504411 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:24:21.504451 1 main.go:301] handling current node
I1209 11:24:31.509752 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:24:31.509791 1 main.go:301] handling current node
I1209 11:24:41.509243 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:24:41.509280 1 main.go:301] handling current node
I1209 11:24:51.510464 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:24:51.510598 1 main.go:301] handling current node
I1209 11:25:01.502783 1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
I1209 11:25:01.502818 1 main.go:301] handling current node
==> kube-apiserver [8f5f2eca7e918147351695f83e907d55498a88bcb6c566f93410605398939265] <==
I1209 11:22:48.503011 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1209 11:22:48.503168 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1209 11:22:48.541036 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1209 11:22:48.545830 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1209 11:22:48.545857 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1209 11:22:49.084246 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1209 11:22:49.131108 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1209 11:22:49.259811 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
I1209 11:22:49.262136 1 controller.go:606] quota admission added evaluator for: endpoints
I1209 11:22:49.266872 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1209 11:22:50.304616 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1209 11:22:51.019541 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1209 11:22:51.088983 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1209 11:22:59.513713 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1209 11:23:07.637132 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1209 11:23:07.798006 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1209 11:23:15.796932 1 client.go:360] parsed scheme: "passthrough"
I1209 11:23:15.796977 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 11:23:15.796986 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1209 11:23:56.690760 1 client.go:360] parsed scheme: "passthrough"
I1209 11:23:56.690807 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 11:23:56.690817 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1209 11:24:31.929626 1 client.go:360] parsed scheme: "passthrough"
I1209 11:24:31.929676 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 11:24:31.929685 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [92b50938c8b97d22b97b90d7211713e83c2e408b0a5bfa70a5b801a11893dfe2] <==
I1209 11:28:15.849479 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 11:28:15.849487 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1209 11:28:52.221165 1 handler_proxy.go:102] no RequestInfo found in the context
E1209 11:28:52.221439 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1209 11:28:52.221455 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1209 11:28:57.463042 1 client.go:360] parsed scheme: "passthrough"
I1209 11:28:57.463088 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 11:28:57.463097 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1209 11:29:38.108585 1 client.go:360] parsed scheme: "passthrough"
I1209 11:29:38.108639 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 11:29:38.108677 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1209 11:30:15.706643 1 client.go:360] parsed scheme: "passthrough"
I1209 11:30:15.706683 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 11:30:15.706692 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1209 11:30:50.571896 1 handler_proxy.go:102] no RequestInfo found in the context
E1209 11:30:50.571969 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1209 11:30:50.571979 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1209 11:30:58.165917 1 client.go:360] parsed scheme: "passthrough"
I1209 11:30:58.166159 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 11:30:58.166237 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1209 11:31:30.803352 1 client.go:360] parsed scheme: "passthrough"
I1209 11:31:30.803409 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1209 11:31:30.803418 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
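The only errors in the restarted apiserver are periodic 503s while refreshing the OpenAPI spec for v1beta1.metrics.k8s.io, i.e. the aggregated metrics API has no reachable backend; the apiserver itself is serving. A quick way to confirm that it is the APIService that is unhealthy is to read its Available condition:

kubectl --context old-k8s-version-623695 get apiservice v1beta1.metrics.k8s.io
kubectl --context old-k8s-version-623695 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'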
==> kube-controller-manager [25fc7bce15ad22f8de3baaeb0bb7f687bb2b3dc94e08ab8eb20977713310736b] <==
I1209 11:23:07.610131 1 taint_manager.go:187] Starting NoExecuteTaintManager
I1209 11:23:07.610877 1 event.go:291] "Event occurred" object="old-k8s-version-623695" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-623695 event: Registered Node old-k8s-version-623695 in Controller"
I1209 11:23:07.660911 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I1209 11:23:07.682472 1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-623695" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
I1209 11:23:07.692007 1 shared_informer.go:247] Caches are synced for resource quota
E1209 11:23:07.702607 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I1209 11:23:07.703597 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vc5sr"
E1209 11:23:07.710915 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I1209 11:23:07.728655 1 shared_informer.go:247] Caches are synced for daemon sets
I1209 11:23:07.729514 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-pll5n"
I1209 11:23:07.748346 1 shared_informer.go:247] Caches are synced for stateful set
I1209 11:23:07.758223 1 shared_informer.go:247] Caches are synced for resource quota
I1209 11:23:07.799767 1 shared_informer.go:247] Caches are synced for attach detach
I1209 11:23:07.842928 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-82lzl"
I1209 11:23:07.843161 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nftmg"
I1209 11:23:07.943393 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
E1209 11:23:07.973322 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"5e46648a-9d67-4c9b-8708-582b05ba991c", ResourceVersion:"277", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63869340171, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241108-5c6d2daf\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000f65e00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000f65e20)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000f65e40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f65e60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f65e80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000f65ea0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241108-5c6d2daf", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000f65ec0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000f65f00)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40005786c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d7fda8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000accfc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400000f820)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d7fdf0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I1209 11:23:08.243533 1 shared_informer.go:247] Caches are synced for garbage collector
I1209 11:23:08.247616 1 shared_informer.go:247] Caches are synced for garbage collector
I1209 11:23:08.247654 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1209 11:23:09.154781 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I1209 11:23:09.202166 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-vc5sr"
I1209 11:23:12.610023 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I1209 11:25:08.248349 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E1209 11:25:08.473379 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
==> kube-controller-manager [8afdcb5ba074eac7202e301ff2b67765f72a8dd4e47a0100cf89910df022caaa] <==
W1209 11:27:13.253225 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 11:27:39.292675 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 11:27:44.903670 1 request.go:655] Throttling request took 1.048444704s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
W1209 11:27:45.755451 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 11:28:09.794794 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 11:28:17.405994 1 request.go:655] Throttling request took 1.048015397s, request: GET:https://192.168.85.2:8443/apis/apps/v1?timeout=32s
W1209 11:28:18.257706 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 11:28:40.296656 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 11:28:49.908208 1 request.go:655] Throttling request took 1.048174157s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
W1209 11:28:50.815702 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 11:29:10.798744 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 11:29:22.466288 1 request.go:655] Throttling request took 1.048437012s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W1209 11:29:23.317791 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 11:29:41.300607 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 11:29:54.968456 1 request.go:655] Throttling request took 1.048379183s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W1209 11:29:55.820057 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 11:30:11.803012 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 11:30:27.470690 1 request.go:655] Throttling request took 1.048384419s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W1209 11:30:28.322316 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 11:30:42.306651 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 11:30:59.972790 1 request.go:655] Throttling request took 1.048360905s, request: GET:https://192.168.85.2:8443/apis/node.k8s.io/v1beta1?timeout=32s
W1209 11:31:00.824339 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1209 11:31:12.810442 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1209 11:31:32.474803 1 request.go:655] Throttling request took 1.047353865s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W1209 11:31:33.326312 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
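The controller-manager churn is a downstream echo of the same problem: garbage-collector and resource-quota discovery keep failing on metrics.k8s.io/v1beta1, and the throttled GETs are just the discovery retries. Listing that one group should reproduce the discovery failure directly:

kubectl --context old-k8s-version-623695 api-resources --api-group=metrics.k8s.io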
==> kube-proxy [167b84f8f987c313cf51f0bcb5bd6488b6ad7b450cd789460a00b513af86bb98] <==
I1209 11:25:52.439850 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I1209 11:25:52.439947 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W1209 11:25:52.478250 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1209 11:25:52.478409 1 server_others.go:185] Using iptables Proxier.
I1209 11:25:52.478927 1 server.go:650] Version: v1.20.0
I1209 11:25:52.486176 1 config.go:224] Starting endpoint slice config controller
I1209 11:25:52.486240 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1209 11:25:52.486305 1 config.go:315] Starting service config controller
I1209 11:25:52.486309 1 shared_informer.go:240] Waiting for caches to sync for service config
I1209 11:25:52.586444 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1209 11:25:52.586963 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [a9ed84f8cfb61c9caff73d1ed13931a69b4186cabc325c3cc1809699aa1be74e] <==
I1209 11:23:10.235776 1 node.go:172] Successfully retrieved node IP: 192.168.85.2
I1209 11:23:10.235882 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
W1209 11:23:10.266372 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1209 11:23:10.266506 1 server_others.go:185] Using iptables Proxier.
I1209 11:23:10.266745 1 server.go:650] Version: v1.20.0
I1209 11:23:10.267194 1 config.go:315] Starting service config controller
I1209 11:23:10.267206 1 shared_informer.go:240] Waiting for caches to sync for service config
I1209 11:23:10.271620 1 config.go:224] Starting endpoint slice config controller
I1209 11:23:10.271646 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1209 11:23:10.367341 1 shared_informer.go:247] Caches are synced for service config
I1209 11:23:10.371877 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [0a9c5fc2481f8b0b984d6634dcbb9354b762189dacc2253907afe96468b89d47] <==
I1209 11:22:43.500659 1 serving.go:331] Generated self-signed cert in-memory
W1209 11:22:47.677424 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1209 11:22:47.677522 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1209 11:22:47.677553 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1209 11:22:47.677599 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1209 11:22:47.760482 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1209 11:22:47.764739 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1209 11:22:47.764811 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1209 11:22:47.764847 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1209 11:22:47.792689 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1209 11:22:47.800362 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1209 11:22:47.801777 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1209 11:22:47.802081 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1209 11:22:47.804474 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1209 11:22:47.804537 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1209 11:22:47.804881 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1209 11:22:47.805253 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1209 11:22:47.805674 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1209 11:22:47.805937 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1209 11:22:47.806943 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1209 11:22:47.825093 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1209 11:22:48.817461 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1209 11:22:49.079903 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I1209 11:22:52.264894 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [ae41a05fd4b1117afac3aa1f7764574fd0ed6427f9d72dc4841a9f2d948eeec3] <==
I1209 11:25:44.156405 1 serving.go:331] Generated self-signed cert in-memory
W1209 11:25:49.538567 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1209 11:25:49.538763 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1209 11:25:49.538835 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1209 11:25:49.538952 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1209 11:25:49.840908 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1209 11:25:49.840933 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1209 11:25:49.843350 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I1209 11:25:49.846025 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1209 11:25:50.046345 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Dec 09 11:30:09 old-k8s-version-623695 kubelet[663]: E1209 11:30:09.546239 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
Dec 09 11:30:11 old-k8s-version-623695 kubelet[663]: E1209 11:30:11.546522 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 11:30:20 old-k8s-version-623695 kubelet[663]: I1209 11:30:20.546126 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
Dec 09 11:30:20 old-k8s-version-623695 kubelet[663]: E1209 11:30:20.547174 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
Dec 09 11:30:26 old-k8s-version-623695 kubelet[663]: E1209 11:30:26.547093 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 11:30:33 old-k8s-version-623695 kubelet[663]: I1209 11:30:33.545850 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
Dec 09 11:30:33 old-k8s-version-623695 kubelet[663]: E1209 11:30:33.546231 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
Dec 09 11:30:39 old-k8s-version-623695 kubelet[663]: E1209 11:30:39.546660 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: I1209 11:30:44.545981 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
Dec 09 11:30:44 old-k8s-version-623695 kubelet[663]: E1209 11:30:44.548327 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
Dec 09 11:30:50 old-k8s-version-623695 kubelet[663]: E1209 11:30:50.546557 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: I1209 11:30:56.545987 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
Dec 09 11:30:56 old-k8s-version-623695 kubelet[663]: E1209 11:30:56.546828 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
Dec 09 11:31:05 old-k8s-version-623695 kubelet[663]: E1209 11:31:05.546612 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: I1209 11:31:07.546697 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
Dec 09 11:31:07 old-k8s-version-623695 kubelet[663]: E1209 11:31:07.547033 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: I1209 11:31:18.546437 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.546764 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.587963 663 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588010 663 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588143 663 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-hcpl8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
Dec 09 11:31:18 old-k8s-version-623695 kubelet[663]: E1209 11:31:18.588176 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
Dec 09 11:31:33 old-k8s-version-623695 kubelet[663]: I1209 11:31:33.546034 663 scope.go:95] [topologymanager] RemoveContainer - Container ID: c302b0a61392269d6463f3b12b8b9aad584620cc2a632866b4071902dbe225fc
Dec 09 11:31:33 old-k8s-version-623695 kubelet[663]: E1209 11:31:33.546430 663 pod_workers.go:191] Error syncing pod defbcdbb-0f56-41d1-badb-cadce4989c8d ("dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-96bls_kubernetes-dashboard(defbcdbb-0f56-41d1-badb-cadce4989c8d)"
Dec 09 11:31:33 old-k8s-version-623695 kubelet[663]: E1209 11:31:33.554210 663 pod_workers.go:191] Error syncing pod 827755ac-0a74-439e-ac59-ea593199e1de ("metrics-server-9975d5f86-9pw69_kube-system(827755ac-0a74-439e-ac59-ea593199e1de)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
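The kubelet section pins down both unhealthy workloads. metrics-server never starts because its image fake.domain/registry.k8s.io/echoserver:1.4 cannot be pulled: fake.domain does not resolve (the DNS lookup against 192.168.85.1:53 fails), and the fake.domain prefix suggests the pull is meant to fail in this test. dashboard-metrics-scraper, meanwhile, sits in a 2m40s CrashLoopBackOff. Both pods can be inspected by the names given above:

kubectl --context old-k8s-version-623695 -n kube-system describe pod metrics-server-9975d5f86-9pw69
kubectl --context old-k8s-version-623695 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-96bls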
==> kubernetes-dashboard [b5b12d4047e8983ca175dce7ad530195e42f6c97736f3423b56ba4b67eee4cfd] <==
2024/12/09 11:26:17 Starting overwatch
2024/12/09 11:26:17 Using namespace: kubernetes-dashboard
2024/12/09 11:26:17 Using in-cluster config to connect to apiserver
2024/12/09 11:26:17 Using secret token for csrf signing
2024/12/09 11:26:17 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/12/09 11:26:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/12/09 11:26:17 Successful initial request to the apiserver, version: v1.20.0
2024/12/09 11:26:17 Generating JWE encryption key
2024/12/09 11:26:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/12/09 11:26:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/12/09 11:26:17 Initializing JWE encryption key from synchronized object
2024/12/09 11:26:17 Creating in-cluster Sidecar client
2024/12/09 11:26:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 11:26:17 Serving insecurely on HTTP port: 9090
2024/12/09 11:26:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 11:27:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 11:27:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 11:28:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 11:28:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 11:29:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 11:29:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 11:30:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 11:30:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/12/09 11:31:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
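The dashboard itself is healthy (serving on port 9090), but its metric client fails its health check every 30 seconds because the metrics pipeline behind dashboard-metrics-scraper never becomes available while metrics-server is down. A minimal Go sketch of that fixed-interval retry pattern — not the dashboard's actual source; the probe URL is a hypothetical in-cluster address for the scraper service:

// a minimal sketch of a fixed-interval health-check retry, matching the
// 30-second cadence visible in the timestamps above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Hypothetical ClusterDNS name for the scraper; the real dashboard
	// reaches it through the apiserver's service proxy instead.
	url := "http://dashboard-metrics-scraper.kubernetes-dashboard.svc:8000/"
	for {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			fmt.Println("metric client healthy")
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		fmt.Println("Metric client health check failed. Retrying in 30 seconds.")
		time.Sleep(30 * time.Second)
	}
}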
==> storage-provisioner [1c864e51ed369180554c763558a548ef00ea435a92cc98a093c1261e29cf2bbf] <==
I1209 11:25:51.788405 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1209 11:26:21.791462 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
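This first storage-provisioner container dies because the apiserver's service VIP (10.96.0.1:443) is not reachable within its ~30 s startup budget while the restarted cluster is still converging; the replacement container in the next block connects, wins the kube-system/k8s.io-minikube-hostpath leader election, and starts the controller normally. A minimal Go sketch of the kind of connectivity probe that failed here — assuming it runs inside a pod on this cluster, where 10.96.0.1 is the default kubernetes Service ClusterIP:

// a minimal sketch: dial the in-cluster apiserver VIP with roughly the same
// time budget that the provisioner's version check exhausted above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 30*time.Second)
	if err != nil {
		// Mirrors the "dial tcp 10.96.0.1:443: i/o timeout" fatal above.
		fmt.Println("apiserver VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}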
==> storage-provisioner [663485e6313972a7cfcbedd678323ecb2f884257c37073d5b2e988fad2ae5465] <==
I1209 11:26:33.718755 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1209 11:26:33.757611 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1209 11:26:33.757680 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1209 11:26:51.271686 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1209 11:26:51.278884 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-623695_172be10f-068d-4c38-abc5-2e361e4bd04d!
I1209 11:26:51.279362 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"817f54db-309c-430e-9890-d82edeb3c4de", APIVersion:"v1", ResourceVersion:"840", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-623695_172be10f-068d-4c38-abc5-2e361e4bd04d became leader
I1209 11:26:51.385020 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-623695_172be10f-068d-4c38-abc5-2e361e4bd04d!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-623695 -n old-k8s-version-623695
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-623695 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-9pw69
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-623695 describe pod metrics-server-9975d5f86-9pw69
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-623695 describe pod metrics-server-9975d5f86-9pw69: exit status 1 (145.690003ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-9pw69" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-623695 describe pod metrics-server-9975d5f86-9pw69: exit status 1
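The NotFound above is a quirk of the post-mortem step: the describe runs without a namespace flag, so kubectl looks in default even though the listing (done with -A) found the pod in kube-system. A minimal Go sketch of the same list-then-describe pattern — not the actual helpers_test.go code — that carries the namespace through to avoid that:

// a minimal sketch: list non-running pods with their namespaces, then
// describe each one in its own namespace.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctxArg := "--context=old-k8s-version-623695"
	// Same field selector as the post-mortem step above, but the jsonpath
	// also emits the namespace so describe can target it.
	out, err := exec.Command("kubectl", ctxArg, "get", "po", "-A",
		`-o=jsonpath={range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}`,
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		panic(err)
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		f := strings.Fields(line)
		if len(f) != 2 {
			continue
		}
		desc, err := exec.Command(
			"kubectl", ctxArg, "-n", f[0], "describe", "pod", f[1]).CombinedOutput()
		if err != nil {
			fmt.Printf("describe %s/%s failed (pod may have been replaced): %v\n", f[0], f[1], err)
			continue
		}
		fmt.Println(string(desc))
	}
}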
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (373.16s)