=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-705847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-705847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m18.199616224s)
-- stdout --
* [old-k8s-version-705847] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20385
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
* Using the docker driver based on existing profile
* Starting "old-k8s-version-705847" primary control-plane node in "old-k8s-version-705847" cluster
* Pulling base image v0.0.46 ...
* Restarting existing docker container for "old-k8s-version-705847" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
* Verifying Kubernetes components...
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-705847 addons enable metrics-server
* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
-- /stdout --
** stderr **
I0210 11:11:15.854169 792122 out.go:345] Setting OutFile to fd 1 ...
I0210 11:11:15.854414 792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 11:11:15.854442 792122 out.go:358] Setting ErrFile to fd 2...
I0210 11:11:15.854464 792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 11:11:15.854732 792122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
I0210 11:11:15.855143 792122 out.go:352] Setting JSON to false
I0210 11:11:15.856189 792122 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14021,"bootTime":1739171855,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I0210 11:11:15.856293 792122 start.go:139] virtualization:
I0210 11:11:15.861413 792122 out.go:177] * [old-k8s-version-705847] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0210 11:11:15.864473 792122 notify.go:220] Checking for updates...
I0210 11:11:15.867546 792122 out.go:177] - MINIKUBE_LOCATION=20385
I0210 11:11:15.870370 792122 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0210 11:11:15.873166 792122 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
I0210 11:11:15.876049 792122 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
I0210 11:11:15.878962 792122 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0210 11:11:15.881652 792122 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0210 11:11:15.884853 792122 config.go:182] Loaded profile config "old-k8s-version-705847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0210 11:11:15.888460 792122 out.go:177] * Kubernetes 1.32.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.1
I0210 11:11:15.891248 792122 driver.go:394] Setting default libvirt URI to qemu:///system
I0210 11:11:15.926905 792122 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
I0210 11:11:15.927039 792122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0210 11:11:16.025922 792122 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:true NGoroutines:69 SystemTime:2025-02-10 11:11:16.013196367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0210 11:11:16.026050 792122 docker.go:318] overlay module found
I0210 11:11:16.029075 792122 out.go:177] * Using the docker driver based on existing profile
I0210 11:11:16.031808 792122 start.go:297] selected driver: docker
I0210 11:11:16.031835 792122 start.go:901] validating driver "docker" against &{Name:old-k8s-version-705847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0210 11:11:16.031955 792122 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0210 11:11:16.032694 792122 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0210 11:11:16.121670 792122 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:56 OomKillDisable:true NGoroutines:69 SystemTime:2025-02-10 11:11:16.110854411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0210 11:11:16.122055 792122 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0210 11:11:16.122080 792122 cni.go:84] Creating CNI manager for ""
I0210 11:11:16.122120 792122 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0210 11:11:16.122159 792122 start.go:340] cluster config:
{Name:old-k8s-version-705847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0210 11:11:16.125339 792122 out.go:177] * Starting "old-k8s-version-705847" primary control-plane node in "old-k8s-version-705847" cluster
I0210 11:11:16.128143 792122 cache.go:121] Beginning downloading kic base image for docker with containerd
I0210 11:11:16.131258 792122 out.go:177] * Pulling base image v0.0.46 ...
I0210 11:11:16.134026 792122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0210 11:11:16.134092 792122 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0210 11:11:16.134103 792122 cache.go:56] Caching tarball of preloaded images
I0210 11:11:16.134211 792122 preload.go:172] Found /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0210 11:11:16.134224 792122 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0210 11:11:16.134356 792122 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/config.json ...
I0210 11:11:16.134591 792122 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0210 11:11:16.162248 792122 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0210 11:11:16.162277 792122 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0210 11:11:16.162291 792122 cache.go:230] Successfully downloaded all kic artifacts
I0210 11:11:16.162314 792122 start.go:360] acquireMachinesLock for old-k8s-version-705847: {Name:mk6cce887f4e2ae32173ee31c8bf770fec39b41b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 11:11:16.162365 792122 start.go:364] duration metric: took 33.788µs to acquireMachinesLock for "old-k8s-version-705847"
I0210 11:11:16.162383 792122 start.go:96] Skipping create...Using existing machine configuration
I0210 11:11:16.162388 792122 fix.go:54] fixHost starting:
I0210 11:11:16.162640 792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
I0210 11:11:16.191549 792122 fix.go:112] recreateIfNeeded on old-k8s-version-705847: state=Stopped err=<nil>
W0210 11:11:16.191578 792122 fix.go:138] unexpected machine state, will restart: <nil>
I0210 11:11:16.194992 792122 out.go:177] * Restarting existing docker container for "old-k8s-version-705847" ...
I0210 11:11:16.197848 792122 cli_runner.go:164] Run: docker start old-k8s-version-705847
I0210 11:11:16.569523 792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
I0210 11:11:16.593452 792122 kic.go:430] container "old-k8s-version-705847" state is running.
I0210 11:11:16.593947 792122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-705847
I0210 11:11:16.620893 792122 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/config.json ...
I0210 11:11:16.621115 792122 machine.go:93] provisionDockerMachine start ...
I0210 11:11:16.621191 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:16.647808 792122 main.go:141] libmachine: Using SSH client type: native
I0210 11:11:16.648075 792122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil> [] 0s} 127.0.0.1 33798 <nil> <nil>}
I0210 11:11:16.648091 792122 main.go:141] libmachine: About to run SSH command:
hostname
I0210 11:11:16.649710 792122 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0210 11:11:19.784831 792122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-705847
I0210 11:11:19.784909 792122 ubuntu.go:169] provisioning hostname "old-k8s-version-705847"
I0210 11:11:19.784995 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:19.811436 792122 main.go:141] libmachine: Using SSH client type: native
I0210 11:11:19.811683 792122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil> [] 0s} 127.0.0.1 33798 <nil> <nil>}
I0210 11:11:19.811695 792122 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-705847 && echo "old-k8s-version-705847" | sudo tee /etc/hostname
I0210 11:11:19.968816 792122 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-705847
I0210 11:11:19.968987 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:19.995722 792122 main.go:141] libmachine: Using SSH client type: native
I0210 11:11:19.996000 792122 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil> [] 0s} 127.0.0.1 33798 <nil> <nil>}
I0210 11:11:19.996018 792122 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-705847' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-705847/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-705847' | sudo tee -a /etc/hosts;
fi
fi
I0210 11:11:20.134280 792122 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0210 11:11:20.134372 792122 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20385-576242/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-576242/.minikube}
I0210 11:11:20.134446 792122 ubuntu.go:177] setting up certificates
I0210 11:11:20.134481 792122 provision.go:84] configureAuth start
I0210 11:11:20.134562 792122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-705847
I0210 11:11:20.165701 792122 provision.go:143] copyHostCerts
I0210 11:11:20.165771 792122 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem, removing ...
I0210 11:11:20.165781 792122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem
I0210 11:11:20.165863 792122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem (1078 bytes)
I0210 11:11:20.165974 792122 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem, removing ...
I0210 11:11:20.165980 792122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem
I0210 11:11:20.166009 792122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem (1123 bytes)
I0210 11:11:20.166073 792122 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem, removing ...
I0210 11:11:20.166078 792122 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem
I0210 11:11:20.166102 792122 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem (1679 bytes)
I0210 11:11:20.166159 792122 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-705847 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-705847]
I0210 11:11:20.587091 792122 provision.go:177] copyRemoteCerts
I0210 11:11:20.587207 792122 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0210 11:11:20.587265 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:20.605581 792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
I0210 11:11:20.706961 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0210 11:11:20.748289 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0210 11:11:20.791018 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0210 11:11:20.835241 792122 provision.go:87] duration metric: took 700.735484ms to configureAuth
I0210 11:11:20.835270 792122 ubuntu.go:193] setting minikube options for container-runtime
I0210 11:11:20.835476 792122 config.go:182] Loaded profile config "old-k8s-version-705847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0210 11:11:20.835490 792122 machine.go:96] duration metric: took 4.214359793s to provisionDockerMachine
I0210 11:11:20.835499 792122 start.go:293] postStartSetup for "old-k8s-version-705847" (driver="docker")
I0210 11:11:20.835516 792122 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0210 11:11:20.835573 792122 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0210 11:11:20.835624 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:20.869011 792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
I0210 11:11:20.963656 792122 ssh_runner.go:195] Run: cat /etc/os-release
I0210 11:11:20.967660 792122 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0210 11:11:20.967701 792122 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0210 11:11:20.967713 792122 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0210 11:11:20.967721 792122 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0210 11:11:20.967734 792122 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-576242/.minikube/addons for local assets ...
I0210 11:11:20.967794 792122 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-576242/.minikube/files for local assets ...
I0210 11:11:20.967878 792122 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem -> 5816292.pem in /etc/ssl/certs
I0210 11:11:20.968001 792122 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0210 11:11:20.978889 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem --> /etc/ssl/certs/5816292.pem (1708 bytes)
I0210 11:11:21.008291 792122 start.go:296] duration metric: took 172.771886ms for postStartSetup
I0210 11:11:21.008402 792122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0210 11:11:21.008470 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:21.037623 792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
I0210 11:11:21.134522 792122 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0210 11:11:21.142110 792122 fix.go:56] duration metric: took 4.979713355s for fixHost
I0210 11:11:21.142147 792122 start.go:83] releasing machines lock for "old-k8s-version-705847", held for 4.979774146s
I0210 11:11:21.142239 792122 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-705847
I0210 11:11:21.177817 792122 ssh_runner.go:195] Run: cat /version.json
I0210 11:11:21.177878 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:21.178151 792122 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0210 11:11:21.178225 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:21.217746 792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
I0210 11:11:21.218527 792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
I0210 11:11:21.480083 792122 ssh_runner.go:195] Run: systemctl --version
I0210 11:11:21.492916 792122 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0210 11:11:21.497852 792122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0210 11:11:21.543157 792122 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0210 11:11:21.543275 792122 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0210 11:11:21.559960 792122 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0210 11:11:21.559988 792122 start.go:495] detecting cgroup driver to use...
I0210 11:11:21.560053 792122 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0210 11:11:21.560126 792122 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0210 11:11:21.579682 792122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0210 11:11:21.597064 792122 docker.go:217] disabling cri-docker service (if available) ...
I0210 11:11:21.597162 792122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0210 11:11:21.618078 792122 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0210 11:11:21.635461 792122 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0210 11:11:21.805663 792122 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0210 11:11:21.950950 792122 docker.go:233] disabling docker service ...
I0210 11:11:21.951058 792122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0210 11:11:21.964888 792122 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0210 11:11:21.977757 792122 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0210 11:11:22.145793 792122 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0210 11:11:22.290886 792122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0210 11:11:22.307242 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0210 11:11:22.336235 792122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0210 11:11:22.347968 792122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0210 11:11:22.359251 792122 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0210 11:11:22.359351 792122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0210 11:11:22.373782 792122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0210 11:11:22.391129 792122 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0210 11:11:22.403898 792122 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0210 11:11:22.419027 792122 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0210 11:11:22.431573 792122 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0210 11:11:22.443361 792122 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0210 11:11:22.455683 792122 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0210 11:11:22.463989 792122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0210 11:11:22.619380 792122 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0210 11:11:22.860525 792122 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0210 11:11:22.860623 792122 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0210 11:11:22.865569 792122 start.go:563] Will wait 60s for crictl version
I0210 11:11:22.865662 792122 ssh_runner.go:195] Run: which crictl
I0210 11:11:22.874367 792122 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0210 11:11:22.979096 792122 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.24
RuntimeApiVersion: v1
I0210 11:11:22.979194 792122 ssh_runner.go:195] Run: containerd --version
I0210 11:11:23.006049 792122 ssh_runner.go:195] Run: containerd --version
I0210 11:11:23.032612 792122 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.24 ...
I0210 11:11:23.035483 792122 cli_runner.go:164] Run: docker network inspect old-k8s-version-705847 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0210 11:11:23.056374 792122 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0210 11:11:23.060980 792122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0210 11:11:23.072675 792122 kubeadm.go:883] updating cluster {Name:old-k8s-version-705847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0210 11:11:23.072806 792122 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0210 11:11:23.072868 792122 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 11:11:23.126444 792122 containerd.go:627] all images are preloaded for containerd runtime.
I0210 11:11:23.126472 792122 containerd.go:534] Images already preloaded, skipping extraction
I0210 11:11:23.126540 792122 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 11:11:23.171932 792122 containerd.go:627] all images are preloaded for containerd runtime.
I0210 11:11:23.171956 792122 cache_images.go:84] Images are preloaded, skipping loading
I0210 11:11:23.171965 792122 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0210 11:11:23.172120 792122 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-705847 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0210 11:11:23.172206 792122 ssh_runner.go:195] Run: sudo crictl info
I0210 11:11:23.231989 792122 cni.go:84] Creating CNI manager for ""
I0210 11:11:23.232018 792122 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0210 11:11:23.232033 792122 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0210 11:11:23.232055 792122 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-705847 NodeName:old-k8s-version-705847 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0210 11:11:23.232196 792122 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-705847"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0210 11:11:23.232269 792122 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0210 11:11:23.247020 792122 binaries.go:44] Found k8s binaries, skipping transfer
I0210 11:11:23.247115 792122 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0210 11:11:23.256261 792122 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0210 11:11:23.275387 792122 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0210 11:11:23.295903 792122 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I0210 11:11:23.315336 792122 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0210 11:11:23.319026 792122 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0210 11:11:23.331205 792122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0210 11:11:23.440997 792122 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0210 11:11:23.458082 792122 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847 for IP: 192.168.76.2
I0210 11:11:23.458105 792122 certs.go:194] generating shared ca certs ...
I0210 11:11:23.458121 792122 certs.go:226] acquiring lock for ca certs: {Name:mk41210dcb5a25827819de2f65fc930debb2adb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 11:11:23.458327 792122 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-576242/.minikube/ca.key
I0210 11:11:23.458397 792122 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.key
I0210 11:11:23.458412 792122 certs.go:256] generating profile certs ...
I0210 11:11:23.458516 792122 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/client.key
I0210 11:11:23.458611 792122 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/apiserver.key.135f3f41
I0210 11:11:23.458701 792122 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/proxy-client.key
I0210 11:11:23.458860 792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629.pem (1338 bytes)
W0210 11:11:23.458916 792122 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629_empty.pem, impossibly tiny 0 bytes
I0210 11:11:23.458932 792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem (1679 bytes)
I0210 11:11:23.458973 792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem (1078 bytes)
I0210 11:11:23.459027 792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem (1123 bytes)
I0210 11:11:23.459064 792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem (1679 bytes)
I0210 11:11:23.459142 792122 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem (1708 bytes)
I0210 11:11:23.459782 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0210 11:11:23.536925 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0210 11:11:23.621168 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0210 11:11:23.689098 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0210 11:11:23.725956 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0210 11:11:23.765962 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0210 11:11:23.807991 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0210 11:11:23.858636 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/old-k8s-version-705847/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0210 11:11:23.897694 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem --> /usr/share/ca-certificates/5816292.pem (1708 bytes)
I0210 11:11:23.944379 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0210 11:11:23.985045 792122 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629.pem --> /usr/share/ca-certificates/581629.pem (1338 bytes)
I0210 11:11:24.036575 792122 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0210 11:11:24.073725 792122 ssh_runner.go:195] Run: openssl version
I0210 11:11:24.083331 792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/581629.pem && ln -fs /usr/share/ca-certificates/581629.pem /etc/ssl/certs/581629.pem"
I0210 11:11:24.103811 792122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/581629.pem
I0210 11:11:24.112042 792122 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:32 /usr/share/ca-certificates/581629.pem
I0210 11:11:24.112156 792122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/581629.pem
I0210 11:11:24.126986 792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/581629.pem /etc/ssl/certs/51391683.0"
I0210 11:11:24.145917 792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5816292.pem && ln -fs /usr/share/ca-certificates/5816292.pem /etc/ssl/certs/5816292.pem"
I0210 11:11:24.160706 792122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5816292.pem
I0210 11:11:24.164475 792122 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:32 /usr/share/ca-certificates/5816292.pem
I0210 11:11:24.164586 792122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5816292.pem
I0210 11:11:24.175705 792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5816292.pem /etc/ssl/certs/3ec20f2e.0"
I0210 11:11:24.189373 792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0210 11:11:24.198520 792122 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0210 11:11:24.205612 792122 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
I0210 11:11:24.205713 792122 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0210 11:11:24.215619 792122 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0210 11:11:24.224354 792122 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0210 11:11:24.233235 792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0210 11:11:24.244555 792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0210 11:11:24.252060 792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0210 11:11:24.262036 792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0210 11:11:24.269421 792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0210 11:11:24.283495 792122 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0210 11:11:24.293463 792122 kubeadm.go:392] StartCluster: {Name:old-k8s-version-705847 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-705847 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0210 11:11:24.293640 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0210 11:11:24.293728 792122 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0210 11:11:24.390830 792122 cri.go:89] found id: "a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
I0210 11:11:24.390915 792122 cri.go:89] found id: "9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
I0210 11:11:24.390935 792122 cri.go:89] found id: "3a1155cdb6488532d05c7f84248ca7fed91cf6700ec92941d37ec310ac01c20e"
I0210 11:11:24.390954 792122 cri.go:89] found id: "6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
I0210 11:11:24.390985 792122 cri.go:89] found id: "d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
I0210 11:11:24.391004 792122 cri.go:89] found id: "04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
I0210 11:11:24.391022 792122 cri.go:89] found id: "3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
I0210 11:11:24.391041 792122 cri.go:89] found id: "8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
I0210 11:11:24.391075 792122 cri.go:89] found id: ""
I0210 11:11:24.391161 792122 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
W0210 11:11:24.407017 792122 kubeadm.go:399] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
stdout:
stderr:
time="2025-02-10T11:11:24Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
I0210 11:11:24.407151 792122 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0210 11:11:24.418766 792122 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0210 11:11:24.418844 792122 kubeadm.go:593] restartPrimaryControlPlane start ...
I0210 11:11:24.418925 792122 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0210 11:11:24.432478 792122 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0210 11:11:24.433021 792122 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-705847" does not appear in /home/jenkins/minikube-integration/20385-576242/kubeconfig
I0210 11:11:24.433184 792122 kubeconfig.go:62] /home/jenkins/minikube-integration/20385-576242/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-705847" cluster setting kubeconfig missing "old-k8s-version-705847" context setting]
I0210 11:11:24.433600 792122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/kubeconfig: {Name:mkb94ed977d6ca716789df506e8beb4caa6483af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
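[note] At this point the kubeconfig is missing both the cluster and the context entry for the profile, so minikube repairs the file under a write lock. A minimal sketch of such a repair using client-go's clientcmd package; the path and the auth-info name are illustrative assumptions, not taken from minikube:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// Hypothetical kubeconfig path; the log uses the Jenkins workspace copy.
	path := "/home/jenkins/.kube/config"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}

	name := "old-k8s-version-705847"

	// Add the missing cluster entry.
	cluster := clientcmdapi.NewCluster()
	cluster.Server = "https://192.168.76.2:8443"
	cfg.Clusters[name] = cluster

	// Add the missing context entry pointing at that cluster.
	ctx := clientcmdapi.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name // assumes a matching user entry exists or is added elsewhere
	cfg.Contexts[name] = ctx

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
	fmt.Println("kubeconfig repaired:", name)
}
```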
I0210 11:11:24.435185 792122 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0210 11:11:24.447496 792122 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0210 11:11:24.447575 792122 kubeadm.go:597] duration metric: took 28.708683ms to restartPrimaryControlPlane
I0210 11:11:24.447599 792122 kubeadm.go:394] duration metric: took 154.146907ms to StartCluster
I0210 11:11:24.447637 792122 settings.go:142] acquiring lock: {Name:mk7602bd83375ef51e640bdffea1b5615cccb289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 11:11:24.447719 792122 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20385-576242/kubeconfig
I0210 11:11:24.448363 792122 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/kubeconfig: {Name:mkb94ed977d6ca716789df506e8beb4caa6483af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 11:11:24.449352 792122 config.go:182] Loaded profile config "old-k8s-version-705847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0210 11:11:24.449428 792122 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0210 11:11:24.449493 792122 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0210 11:11:24.449869 792122 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-705847"
I0210 11:11:24.449888 792122 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-705847"
W0210 11:11:24.449895 792122 addons.go:247] addon storage-provisioner should already be in state true
I0210 11:11:24.449944 792122 host.go:66] Checking if "old-k8s-version-705847" exists ...
I0210 11:11:24.450507 792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
I0210 11:11:24.450741 792122 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-705847"
I0210 11:11:24.450777 792122 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-705847"
I0210 11:11:24.451238 792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
I0210 11:11:24.452288 792122 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-705847"
I0210 11:11:24.452308 792122 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-705847"
W0210 11:11:24.452316 792122 addons.go:247] addon metrics-server should already be in state true
I0210 11:11:24.452348 792122 host.go:66] Checking if "old-k8s-version-705847" exists ...
I0210 11:11:24.452758 792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
I0210 11:11:24.453089 792122 addons.go:69] Setting dashboard=true in profile "old-k8s-version-705847"
I0210 11:11:24.453127 792122 addons.go:238] Setting addon dashboard=true in "old-k8s-version-705847"
W0210 11:11:24.453138 792122 addons.go:247] addon dashboard should already be in state true
I0210 11:11:24.453164 792122 host.go:66] Checking if "old-k8s-version-705847" exists ...
I0210 11:11:24.453680 792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
I0210 11:11:24.460143 792122 out.go:177] * Verifying Kubernetes components...
I0210 11:11:24.465674 792122 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0210 11:11:24.507561 792122 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0210 11:11:24.511359 792122 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0210 11:11:24.514349 792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0210 11:11:24.514376 792122 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0210 11:11:24.514452 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:24.523507 792122 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-705847"
W0210 11:11:24.523530 792122 addons.go:247] addon default-storageclass should already be in state true
I0210 11:11:24.523555 792122 host.go:66] Checking if "old-k8s-version-705847" exists ...
I0210 11:11:24.523956 792122 cli_runner.go:164] Run: docker container inspect old-k8s-version-705847 --format={{.State.Status}}
I0210 11:11:24.546433 792122 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0210 11:11:24.552501 792122 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0210 11:11:24.552521 792122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0210 11:11:24.552588 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:24.561561 792122 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0210 11:11:24.569573 792122 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0210 11:11:24.569610 792122 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0210 11:11:24.569680 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:24.583548 792122 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0210 11:11:24.583583 792122 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0210 11:11:24.583656 792122 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-705847
I0210 11:11:24.603286 792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
I0210 11:11:24.624884 792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
I0210 11:11:24.655484 792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
I0210 11:11:24.656660 792122 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33798 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/old-k8s-version-705847/id_rsa Username:docker}
I0210 11:11:24.797935 792122 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0210 11:11:24.853207 792122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0210 11:11:24.853227 792122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0210 11:11:24.860987 792122 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-705847" to be "Ready" ...
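[note] The `node_ready.go` wait that starts here polls the node object until its Ready condition is True; in this run it succeeds about 18 seconds later, at 11:11:42. A minimal sketch of the same check with client-go, assuming the in-VM kubeconfig path shown in the log; the poll interval is illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-705847", metav1.GetOptions{})
		if err != nil {
			// "connection refused" is expected while the apiserver restarts,
			// matching the node_ready.go:53 lines further down.
			fmt.Println("error getting node:", err)
		} else if nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node")
}
```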
I0210 11:11:24.899619 792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0210 11:11:24.899642 792122 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0210 11:11:24.902356 792122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0210 11:11:24.902376 792122 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0210 11:11:24.940209 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0210 11:11:24.956611 792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0210 11:11:24.956634 792122 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0210 11:11:24.972760 792122 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0210 11:11:24.972784 792122 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0210 11:11:24.989039 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0210 11:11:25.038226 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0210 11:11:25.048383 792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0210 11:11:25.048410 792122 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0210 11:11:25.210003 792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0210 11:11:25.210027 792122 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
W0210 11:11:25.258029 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.258066 792122 retry.go:31] will retry after 354.239843ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
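[note] Every `apply failed, will retry` pair in this stretch follows the same pattern: the apiserver on localhost:8443 is still coming up after the container restart, so `retry.go` re-runs the kubectl apply after a short, growing, jittered delay until it succeeds (all four addon applies complete by 11:11:43). A minimal sketch of that loop; the delays, attempt count, and reliance on `kubectl` being on PATH are illustrative, not minikube's tuning:

```go
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryApply re-runs a kubectl apply until it succeeds or attempts run
// out, sleeping a jittered, doubling delay between tries. Sketch of the
// shape of the "will retry after ..." lines above, not the real code.
func retryApply(args []string, attempts int) error {
	delay := 300 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = exec.Command("kubectl", args...).Run(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("apply failed, will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	if err := retryApply([]string{"apply", "--force", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml"}, 5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```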
I0210 11:11:25.304724 792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0210 11:11:25.304746 792122 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
W0210 11:11:25.318737 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.318794 792122 retry.go:31] will retry after 165.988594ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.356062 792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0210 11:11:25.356087 792122 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
W0210 11:11:25.358528 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.358558 792122 retry.go:31] will retry after 218.751579ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.380342 792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0210 11:11:25.380407 792122 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0210 11:11:25.403764 792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0210 11:11:25.403790 792122 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0210 11:11:25.422120 792122 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0210 11:11:25.422144 792122 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0210 11:11:25.441134 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0210 11:11:25.485496 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0210 11:11:25.530016 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.530050 792122 retry.go:31] will retry after 272.072779ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0210 11:11:25.568506 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.568587 792122 retry.go:31] will retry after 436.399785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.577736 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0210 11:11:25.613075 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0210 11:11:25.719121 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.719159 792122 retry.go:31] will retry after 218.411415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0210 11:11:25.748822 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.748859 792122 retry.go:31] will retry after 286.400128ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.803180 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0210 11:11:25.904297 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.904340 792122 retry.go:31] will retry after 279.923457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:25.938645 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0210 11:11:26.006045 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0210 11:11:26.036399 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0210 11:11:26.060096 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:26.060130 792122 retry.go:31] will retry after 704.765952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:26.184442 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0210 11:11:26.295511 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:26.295546 792122 retry.go:31] will retry after 443.099927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0210 11:11:26.295643 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:26.295677 792122 retry.go:31] will retry after 680.096408ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0210 11:11:26.363775 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:26.363808 792122 retry.go:31] will retry after 708.016662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:26.739422 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0210 11:11:26.765814 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0210 11:11:26.862494 792122 node_ready.go:53] error getting node "old-k8s-version-705847": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-705847": dial tcp 192.168.76.2:8443: connect: connection refused
W0210 11:11:26.901986 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:26.902022 792122 retry.go:31] will retry after 1.10804755s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0210 11:11:26.921795 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:26.921879 792122 retry.go:31] will retry after 1.043194883s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:26.976183 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0210 11:11:27.072634 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0210 11:11:27.090874 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:27.090984 792122 retry.go:31] will retry after 475.282466ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0210 11:11:27.194195 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:27.194290 792122 retry.go:31] will retry after 744.668813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:27.567465 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0210 11:11:27.642188 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:27.642221 792122 retry.go:31] will retry after 1.775042521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:27.940140 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0210 11:11:27.965531 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0210 11:11:28.010931 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0210 11:11:28.036378 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:28.036414 792122 retry.go:31] will retry after 830.931937ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0210 11:11:28.081898 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:28.081934 792122 retry.go:31] will retry after 1.127697549s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0210 11:11:28.117427 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:28.117462 792122 retry.go:31] will retry after 1.115774173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:28.868310 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0210 11:11:28.950510 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:28.950542 792122 retry.go:31] will retry after 2.371448727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:29.209968 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0210 11:11:29.234273 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0210 11:11:29.299885 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:29.299920 792122 retry.go:31] will retry after 2.749982384s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0210 11:11:29.338478 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:29.338513 792122 retry.go:31] will retry after 957.280972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:29.362077 792122 node_ready.go:53] error getting node "old-k8s-version-705847": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-705847": dial tcp 192.168.76.2:8443: connect: connection refused
I0210 11:11:29.418394 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0210 11:11:29.490524 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:29.490561 792122 retry.go:31] will retry after 2.694787037s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:30.296009 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0210 11:11:30.402862 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:30.402939 792122 retry.go:31] will retry after 2.613930879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:31.322578 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0210 11:11:31.362226 792122 node_ready.go:53] error getting node "old-k8s-version-705847": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-705847": dial tcp 192.168.76.2:8443: connect: connection refused
W0210 11:11:31.448771 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:31.448800 792122 retry.go:31] will retry after 3.556165586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:32.050740 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0210 11:11:32.186145 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0210 11:11:32.286620 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:32.286649 792122 retry.go:31] will retry after 3.688898467s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0210 11:11:32.344854 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:32.344886 792122 retry.go:31] will retry after 2.718862749s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:33.017391 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0210 11:11:33.304223 792122 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:33.304253 792122 retry.go:31] will retry after 5.591745868s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0210 11:11:35.006023 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0210 11:11:35.063907 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0210 11:11:35.975696 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0210 11:11:38.897639 792122 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0210 11:11:42.497806 792122 node_ready.go:49] node "old-k8s-version-705847" has status "Ready":"True"
I0210 11:11:42.497829 792122 node_ready.go:38] duration metric: took 17.636742451s for node "old-k8s-version-705847" to be "Ready" ...
I0210 11:11:42.497841 792122 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
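[note] From here the same poll-until-ready shape is applied per pod: each system-critical pod is fetched and its PodReady condition inspected, which is what each `pod_ready.go` line below reports. A sketch of just the condition test, as a companion to the node loop shown earlier (the package name is illustrative):

```go
package ready

import corev1 "k8s.io/api/core/v1"

// podReady reports whether the pod's Ready condition is True. The
// surrounding polling loop has the same shape as the node wait above:
// fetch, test, sleep, repeat until the per-pod timeout expires.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```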
I0210 11:11:42.570572 792122 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-7fkgl" in "kube-system" namespace to be "Ready" ...
I0210 11:11:42.790395 792122 pod_ready.go:93] pod "coredns-74ff55c5b-7fkgl" in "kube-system" namespace has status "Ready":"True"
I0210 11:11:42.790465 792122 pod_ready.go:82] duration metric: took 219.813767ms for pod "coredns-74ff55c5b-7fkgl" in "kube-system" namespace to be "Ready" ...
I0210 11:11:42.790492 792122 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
I0210 11:11:42.876797 792122 pod_ready.go:93] pod "etcd-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"True"
I0210 11:11:42.876869 792122 pod_ready.go:82] duration metric: took 86.355801ms for pod "etcd-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
I0210 11:11:42.876899 792122 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
I0210 11:11:42.888623 792122 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"True"
I0210 11:11:42.888697 792122 pod_ready.go:82] duration metric: took 11.777178ms for pod "kube-apiserver-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
I0210 11:11:42.888725 792122 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
I0210 11:11:43.959052 792122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.952972726s)
I0210 11:11:43.959299 792122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.895364656s)
I0210 11:11:43.959418 792122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.983690912s)
I0210 11:11:43.959459 792122 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-705847"
I0210 11:11:43.959512 792122 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.061849816s)
I0210 11:11:43.963300 792122 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-705847 addons enable metrics-server
I0210 11:11:43.967878 792122 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
I0210 11:11:43.970836 792122 addons.go:514] duration metric: took 19.521343547s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
I0210 11:11:44.893807 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:11:46.897527 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:11:49.394509 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:11:51.894013 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:11:53.895036 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:11:55.904815 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:11:58.393438 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:00.416706 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:02.894788 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:04.896093 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:06.896817 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:09.395491 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:11.894043 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:13.894340 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:16.394012 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:18.394251 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:20.894790 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:22.897293 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:25.394833 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:27.394878 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:29.396928 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:31.894532 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:34.394217 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:36.395032 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:38.396249 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:40.894450 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:42.901194 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:45.395423 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:47.895201 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:50.394554 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:52.894148 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:54.895047 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:56.895251 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:12:59.394365 792122 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:00.419446 792122 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"True"
I0210 11:13:00.419487 792122 pod_ready.go:82] duration metric: took 1m17.530741501s for pod "kube-controller-manager-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
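The pod_ready.go lines above and below are one poll loop per pod: fetch the pod every ~2s, check its Ready condition, and stop on "True" or on the 6m budget. A minimal sketch with client-go, assuming a v0.20-era module set; the kubeconfig path and pod name mirror the log but are otherwise illustrative:
-- example: waiting for a pod's Ready condition (Go sketch) --
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	name := "kube-controller-manager-old-k8s-version-705847"
	start := time.Now()
	// Poll every 2s, up to 6m, matching the cadence and budget in the log.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient fetch errors as "not ready yet"
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q has status \"Ready\":%q\n", name, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		log.Fatalf("waitPodCondition: %v", err) // e.g. timed out waiting for the condition
	}
	fmt.Printf("took %s for pod %q to be \"Ready\"\n", time.Since(start), name)
}
-- /example --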
I0210 11:13:00.419505 792122 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qt8rk" in "kube-system" namespace to be "Ready" ...
I0210 11:13:00.425620 792122 pod_ready.go:93] pod "kube-proxy-qt8rk" in "kube-system" namespace has status "Ready":"True"
I0210 11:13:00.425648 792122 pod_ready.go:82] duration metric: took 6.132546ms for pod "kube-proxy-qt8rk" in "kube-system" namespace to be "Ready" ...
I0210 11:13:00.425662 792122 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
I0210 11:13:02.431657 792122 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:04.931083 792122 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:06.931948 792122 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:09.430693 792122 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace has status "Ready":"True"
I0210 11:13:09.430718 792122 pod_ready.go:82] duration metric: took 9.005047393s for pod "kube-scheduler-old-k8s-version-705847" in "kube-system" namespace to be "Ready" ...
I0210 11:13:09.430731 792122 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace to be "Ready" ...
I0210 11:13:11.436362 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:13.436582 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:15.936385 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:17.936877 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:20.435894 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:22.436003 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:24.436351 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:26.936498 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:29.436682 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:31.936793 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:33.937359 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:36.437798 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:38.936440 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:40.937292 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:43.436209 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:45.436616 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:47.937193 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:50.436291 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:52.436723 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:54.936588 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:57.435458 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:13:59.436657 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:01.936887 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:03.937487 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:06.436671 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:08.941715 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:11.436959 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:13.497480 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:15.937941 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:18.436313 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:20.935995 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:22.936126 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:25.436984 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:27.936659 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:30.436634 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:32.436812 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:34.936971 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:37.437007 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:39.437145 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:41.935991 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:44.437013 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:46.437311 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:48.936228 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:50.936540 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:52.937112 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:55.436400 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:57.936329 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:14:59.936423 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:01.936918 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:03.962190 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:06.436269 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:08.436478 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:10.939425 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:13.438546 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:15.936461 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:17.937109 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:20.436649 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:22.936257 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:24.936644 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:26.936928 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:29.436313 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:31.936767 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:34.435797 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:36.436776 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:38.437350 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:40.936694 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:43.437290 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:45.936133 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:48.436279 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:50.436488 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:52.937555 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:55.436217 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:15:57.936407 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:00.446045 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:02.936059 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:04.936115 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:06.936974 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:09.436519 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:11.936908 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:14.436578 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:16.436983 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:18.936240 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:20.936406 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:22.936754 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:25.437681 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:27.935599 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:29.937100 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:31.948008 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:34.436184 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:36.436756 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:38.936032 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:40.936524 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:43.436272 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:45.437061 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:47.938001 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:49.953799 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:52.436865 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:54.990857 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:57.436603 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:16:59.936464 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:17:01.945370 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:17:04.444285 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:17:06.951299 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:17:09.437288 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:17:09.437357 792122 pod_ready.go:82] duration metric: took 4m0.006581256s for pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace to be "Ready" ...
E0210 11:17:09.437376 792122 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0210 11:17:09.437385  792122 pod_ready.go:39] duration metric: took 5m26.939532937s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
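The `context deadline exceeded` error above is the expected shape of a context-bounded wait: the per-check poll never fails, the surrounding context does. A minimal self-contained sketch, assuming nothing beyond the standard library; waitPodCondition here is a hypothetical stand-in for the helper named in the log:
-- example: context-bounded wait (Go sketch) --
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitPodCondition polls ready() every 2s until it returns true or ctx expires.
func waitPodCondition(ctx context.Context, ready func() bool) error {
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		if ready() {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // context deadline exceeded
		case <-tick.C:
		}
	}
}

func main() {
	// 4m budget, mirroring the 4m0.006581256s duration reported above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	err := waitPodCondition(ctx, func() bool { return false }) // pod never turns Ready
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("WaitExtra: waitPodCondition:", err)
	}
}
-- /example --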
I0210 11:17:09.437403 792122 api_server.go:52] waiting for apiserver process to appear ...
I0210 11:17:09.437440 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0210 11:17:09.437540 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0210 11:17:09.487231 792122 cri.go:89] found id: "ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
I0210 11:17:09.487255 792122 cri.go:89] found id: "04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
I0210 11:17:09.487276 792122 cri.go:89] found id: ""
I0210 11:17:09.487283 792122 logs.go:282] 2 containers: [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01]
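Each cri.go/logs.go cluster above and below follows the same recipe: list container IDs with `crictl ps -a --quiet --name=<component>`, then split the whitespace-separated output. A minimal Go sketch of that step, assuming crictl is installed and sudo runs non-interactively:
-- example: listing CRI containers by name (Go sketch) --
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// listContainers returns all container IDs (any state) whose name matches.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, id := range strings.Fields(string(out)) {
		fmt.Printf("found id: %q\n", id)
		ids = append(ids, id)
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}
-- /example --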
I0210 11:17:09.487345 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.491577 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.495484 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0210 11:17:09.495557 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0210 11:17:09.544535 792122 cri.go:89] found id: "4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
I0210 11:17:09.544558 792122 cri.go:89] found id: "3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
I0210 11:17:09.544563 792122 cri.go:89] found id: ""
I0210 11:17:09.544570 792122 logs.go:282] 2 containers: [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba]
I0210 11:17:09.544628 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.548930 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.552295 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0210 11:17:09.552365 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0210 11:17:09.604781 792122 cri.go:89] found id: "23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
I0210 11:17:09.604800 792122 cri.go:89] found id: "a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
I0210 11:17:09.604806 792122 cri.go:89] found id: ""
I0210 11:17:09.604812 792122 logs.go:282] 2 containers: [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d]
I0210 11:17:09.604866 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.608845 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.613042 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0210 11:17:09.613164 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0210 11:17:09.658259 792122 cri.go:89] found id: "2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
I0210 11:17:09.658335 792122 cri.go:89] found id: "8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
I0210 11:17:09.658389 792122 cri.go:89] found id: ""
I0210 11:17:09.658414 792122 logs.go:282] 2 containers: [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd]
I0210 11:17:09.658491 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.662928 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.666904 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0210 11:17:09.667021 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0210 11:17:09.714379 792122 cri.go:89] found id: "2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
I0210 11:17:09.714455 792122 cri.go:89] found id: "6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
I0210 11:17:09.714475 792122 cri.go:89] found id: ""
I0210 11:17:09.714502 792122 logs.go:282] 2 containers: [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1]
I0210 11:17:09.714574 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.718758 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.722517 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0210 11:17:09.722636 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0210 11:17:09.771474 792122 cri.go:89] found id: "aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
I0210 11:17:09.771545 792122 cri.go:89] found id: "d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
I0210 11:17:09.771565 792122 cri.go:89] found id: ""
I0210 11:17:09.771588 792122 logs.go:282] 2 containers: [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d]
I0210 11:17:09.771661 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.775353 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.779153 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0210 11:17:09.779273 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0210 11:17:09.825744 792122 cri.go:89] found id: "63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
I0210 11:17:09.825818 792122 cri.go:89] found id: "9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
I0210 11:17:09.825838 792122 cri.go:89] found id: ""
I0210 11:17:09.825861 792122 logs.go:282] 2 containers: [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4]
I0210 11:17:09.825933 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.829905 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.833685 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0210 11:17:09.833803 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0210 11:17:09.880184 792122 cri.go:89] found id: "b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
I0210 11:17:09.880260 792122 cri.go:89] found id: "221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
I0210 11:17:09.880279 792122 cri.go:89] found id: ""
I0210 11:17:09.880303 792122 logs.go:282] 2 containers: [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525]
I0210 11:17:09.880385 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.884665 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.888489 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0210 11:17:09.888609 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0210 11:17:09.933140 792122 cri.go:89] found id: "6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
I0210 11:17:09.933213 792122 cri.go:89] found id: ""
I0210 11:17:09.933235  792122 logs.go:282] 1 container: [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7]
I0210 11:17:09.933325 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.937792 792122 logs.go:123] Gathering logs for dmesg ...
I0210 11:17:09.937862 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0210 11:17:09.973568 792122 logs.go:123] Gathering logs for kube-apiserver [04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01] ...
I0210 11:17:09.973650 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
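Each `Gathering logs for ...` pair above and below shells out to `crictl logs --tail 400 <id>`. A minimal sketch of one such fetch; the container ID is copied from the kube-apiserver line above and is otherwise illustrative:
-- example: fetching a container's recent logs (Go sketch) --
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	id := "04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
	// crictl writes container logs to stdout/stderr; capture both.
	out, err := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
	if err != nil {
		log.Fatalf("crictl logs: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
-- /example --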
I0210 11:17:10.088454 792122 logs.go:123] Gathering logs for kube-proxy [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708] ...
I0210 11:17:10.088500 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
I0210 11:17:10.153844 792122 logs.go:123] Gathering logs for kube-apiserver [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1] ...
I0210 11:17:10.153874 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
I0210 11:17:10.260745 792122 logs.go:123] Gathering logs for etcd [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587] ...
I0210 11:17:10.260782 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
I0210 11:17:10.321419 792122 logs.go:123] Gathering logs for etcd [3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba] ...
I0210 11:17:10.321451 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
I0210 11:17:10.397177 792122 logs.go:123] Gathering logs for coredns [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198] ...
I0210 11:17:10.397207 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
I0210 11:17:10.462194 792122 logs.go:123] Gathering logs for kube-scheduler [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff] ...
I0210 11:17:10.462224 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
I0210 11:17:10.528776 792122 logs.go:123] Gathering logs for kube-proxy [6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1] ...
I0210 11:17:10.528803 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
I0210 11:17:10.574450 792122 logs.go:123] Gathering logs for kube-controller-manager [d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d] ...
I0210 11:17:10.574521 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
I0210 11:17:10.652275 792122 logs.go:123] Gathering logs for kindnet [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870] ...
I0210 11:17:10.652360 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
I0210 11:17:10.716278 792122 logs.go:123] Gathering logs for storage-provisioner [221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525] ...
I0210 11:17:10.716454 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
I0210 11:17:10.765251 792122 logs.go:123] Gathering logs for kubelet ...
I0210 11:17:10.765318 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
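The `Found kubelet problem` entries below come from scanning the last 400 kubelet journal lines for known error markers. A minimal sketch of that scan, assuming journalctl is reachable via sudo; the two substrings are taken from the entries below, and the pattern list is illustrative:
-- example: scanning the kubelet journal for problems (Go sketch) --
package main

import (
	"bufio"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400")
	out, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	patterns := []string{"Error syncing pod", "Failed to watch"}
	sc := bufio.NewScanner(out)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		for _, p := range patterns {
			if strings.Contains(line, p) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}
-- /example --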
W0210 11:17:10.826952 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.495697 665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.827210 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496065 665 reflector.go:138] object-"kube-system"/"coredns-token-7cchl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-7cchl" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.827464 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496388 665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-r7rrz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-r7rrz" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.827695 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496738 665 reflector.go:138] object-"default"/"default-token-q8wzb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-q8wzb" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.827929 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.500993 665 reflector.go:138] object-"kube-system"/"kindnet-token-h7brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h7brt" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.828154 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501261 665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.828396 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501486 665 reflector.go:138] object-"kube-system"/"metrics-server-token-pddsx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-pddsx" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.828635 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501700 665 reflector.go:138] object-"kube-system"/"kube-proxy-token-92pf5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-92pf5" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.835472 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:43 old-k8s-version-705847 kubelet[665]: E0210 11:11:43.988520 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:10.835682 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:44 old-k8s-version-705847 kubelet[665]: E0210 11:11:44.494625 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.839280 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:57 old-k8s-version-705847 kubelet[665]: E0210 11:11:57.176598 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:10.841560 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:06 old-k8s-version-705847 kubelet[665]: E0210 11:12:06.587650 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.841923 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:07 old-k8s-version-705847 kubelet[665]: E0210 11:12:07.588161 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.842131 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:12 old-k8s-version-705847 kubelet[665]: E0210 11:12:12.166247 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.842823 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:13 old-k8s-version-705847 kubelet[665]: E0210 11:12:13.359119 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.843281 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:15 old-k8s-version-705847 kubelet[665]: E0210 11:12:15.615500 665 pod_workers.go:191] Error syncing pod 9fb88c78-7e13-4c39-b861-6a75febd2f29 ("storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"
W0210 11:17:10.844228 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:24 old-k8s-version-705847 kubelet[665]: E0210 11:12:24.650563 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.846762 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:26 old-k8s-version-705847 kubelet[665]: E0210 11:12:26.179066 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:10.847253 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:33 old-k8s-version-705847 kubelet[665]: E0210 11:12:33.359712 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.847462 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:38 old-k8s-version-705847 kubelet[665]: E0210 11:12:38.166028 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.847813 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:44 old-k8s-version-705847 kubelet[665]: E0210 11:12:44.165493 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.848022 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:52 old-k8s-version-705847 kubelet[665]: E0210 11:12:52.166523 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.848632 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:58 old-k8s-version-705847 kubelet[665]: E0210 11:12:58.763561 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.848983 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:03 old-k8s-version-705847 kubelet[665]: E0210 11:13:03.358918 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.849189 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:06 old-k8s-version-705847 kubelet[665]: E0210 11:13:06.166020 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.849553 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:15 old-k8s-version-705847 kubelet[665]: E0210 11:13:15.165381 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.852128 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:20 old-k8s-version-705847 kubelet[665]: E0210 11:13:20.182857 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:10.852482 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:27 old-k8s-version-705847 kubelet[665]: E0210 11:13:27.165926 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.852710 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:34 old-k8s-version-705847 kubelet[665]: E0210 11:13:34.166696 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.853073 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:38 old-k8s-version-705847 kubelet[665]: E0210 11:13:38.165396 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.853287 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:46 old-k8s-version-705847 kubelet[665]: E0210 11:13:46.167918 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.853938 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:51 old-k8s-version-705847 kubelet[665]: E0210 11:13:51.896354 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.854291 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:53 old-k8s-version-705847 kubelet[665]: E0210 11:13:53.359574 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.854509 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:00 old-k8s-version-705847 kubelet[665]: E0210 11:14:00.171443 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.854876 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:05 old-k8s-version-705847 kubelet[665]: E0210 11:14:05.165923 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.855096 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:12 old-k8s-version-705847 kubelet[665]: E0210 11:14:12.166921 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.855567 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:17 old-k8s-version-705847 kubelet[665]: E0210 11:14:17.165864 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.855778 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:26 old-k8s-version-705847 kubelet[665]: E0210 11:14:26.165733 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.856158 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:31 old-k8s-version-705847 kubelet[665]: E0210 11:14:31.165921 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.856381 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:40 old-k8s-version-705847 kubelet[665]: E0210 11:14:40.166598 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.856738 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:46 old-k8s-version-705847 kubelet[665]: E0210 11:14:46.165929 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.859236 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:55 old-k8s-version-705847 kubelet[665]: E0210 11:14:55.174499 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:10.859591 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:58 old-k8s-version-705847 kubelet[665]: E0210 11:14:58.165404 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.859943 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:09 old-k8s-version-705847 kubelet[665]: E0210 11:15:09.165442 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.864467 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:10 old-k8s-version-705847 kubelet[665]: E0210 11:15:10.167175 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.865098 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:21 old-k8s-version-705847 kubelet[665]: E0210 11:15:21.169488 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.865311 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:22 old-k8s-version-705847 kubelet[665]: E0210 11:15:22.173180 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.865688 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:23 old-k8s-version-705847 kubelet[665]: E0210 11:15:23.359507 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.865901 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:35 old-k8s-version-705847 kubelet[665]: E0210 11:15:35.165986 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.866255 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:36 old-k8s-version-705847 kubelet[665]: E0210 11:15:36.165557 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.866470 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:50 old-k8s-version-705847 kubelet[665]: E0210 11:15:50.166193 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.866821 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: E0210 11:15:51.165434 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.867030 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:01 old-k8s-version-705847 kubelet[665]: E0210 11:16:01.166143 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.867413 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: E0210 11:16:03.165397 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.867627 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:13 old-k8s-version-705847 kubelet[665]: E0210 11:16:13.165902 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.868007 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: E0210 11:16:14.165502 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.868230 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:25 old-k8s-version-705847 kubelet[665]: E0210 11:16:25.165821 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.868594 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: E0210 11:16:28.166303 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.868802 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:39 old-k8s-version-705847 kubelet[665]: E0210 11:16:39.165848 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.869151 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.869370 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.869737 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.869948 792122 logs.go:138] Found kubelet problem: Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.870300 792122 logs.go:138] Found kubelet problem: Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
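The scan above boils down to two recurring failures. metrics-server-9975d5f86-nvn7z never starts because its image, fake.domain/registry.k8s.io/echoserver:1.4, lives on a host that does not resolve (the lookup of fake.domain against 192.168.76.1:53 returns "no such host"), so the pod alternates between ErrImagePull and ImagePullBackOff; the unresolvable registry is presumably intentional in this test fixture. dashboard-metrics-scraper-8d5bb5db8-r58kw, by contrast, starts and then crashes, with its CrashLoopBackOff window growing from 1m20s to 2m40s over the interval shown. A minimal sketch of reproducing the pull failure by hand from inside the node (hypothetical invocations; the crictl path matches the runs above):

  # DNS check against the resolver the kubelet used -- expected to fail
  nslookup fake.domain 192.168.76.1
  # ask the runtime to pull the same reference; this should surface the same
  # "failed to resolve reference ... no such host" rpc error as the kubelet log
  sudo /usr/bin/crictl pull fake.domain/registry.k8s.io/echoserver:1.4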
I0210 11:17:10.870323 792122 logs.go:123] Gathering logs for describe nodes ...
I0210 11:17:10.870349 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0210 11:17:11.085761 792122 logs.go:123] Gathering logs for coredns [a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d] ...
I0210 11:17:11.085801 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
I0210 11:17:11.150903 792122 logs.go:123] Gathering logs for kube-scheduler [8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd] ...
I0210 11:17:11.150935 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
I0210 11:17:11.209154 792122 logs.go:123] Gathering logs for storage-provisioner [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84] ...
I0210 11:17:11.209226 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
I0210 11:17:11.266387 792122 logs.go:123] Gathering logs for container status ...
I0210 11:17:11.266414 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0210 11:17:11.330870 792122 logs.go:123] Gathering logs for kube-controller-manager [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa] ...
I0210 11:17:11.330958 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
I0210 11:17:11.456996 792122 logs.go:123] Gathering logs for kindnet [9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4] ...
I0210 11:17:11.457085 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
I0210 11:17:11.505129 792122 logs.go:123] Gathering logs for kubernetes-dashboard [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7] ...
I0210 11:17:11.505201 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
I0210 11:17:11.562557 792122 logs.go:123] Gathering logs for containerd ...
I0210 11:17:11.562640 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
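Each "Gathering logs for <component> [<id>]" step above follows the same pattern: one command per discovered container, run over SSH, tailing the last 400 lines, with journalctl used for the runtime unit itself. A sketch of the per-container step (the id is a placeholder; real ids appear in full above):

  # tail one container's logs, as in the ssh_runner invocations above
  ID=<container-id-from-crictl-ps>
  sudo /usr/bin/crictl logs --tail 400 "$ID"
  # unit-level logs for the container runtime
  sudo journalctl -u containerd -n 400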
I0210 11:17:11.629917 792122 out.go:358] Setting ErrFile to fd 2...
I0210 11:17:11.629991 792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0210 11:17:11.630090 792122 out.go:270] X Problems detected in kubelet:
W0210 11:17:11.630134 792122 out.go:270] Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:11.630302 792122 out.go:270] Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:11.630335 792122 out.go:270] Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:11.630379 792122 out.go:270] Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:11.630431 792122 out.go:270] Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
I0210 11:17:11.630463 792122 out.go:358] Setting ErrFile to fd 2...
I0210 11:17:11.630494 792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
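With the problem summary printed, the start logic goes back to waiting on the API server: the next probe lands ten seconds later (11:17:11 to 11:17:21). An equivalent manual check of the endpoint being polled, using the same kubectl binary and kubeconfig that appear in the describe-nodes runs above:

  # hit the apiserver health endpoint directly
  sudo /var/lib/minikube/binaries/v1.20.0/kubectl get --raw /healthz \
    --kubeconfig=/var/lib/minikube/kubeconfig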
I0210 11:17:21.633459 792122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0210 11:17:21.646627 792122 api_server.go:72] duration metric: took 5m57.197162359s to wait for apiserver process to appear ...
I0210 11:17:21.646652 792122 api_server.go:88] waiting for apiserver healthz status ...
I0210 11:17:21.646689 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0210 11:17:21.646747 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0210 11:17:21.702943 792122 cri.go:89] found id: "ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
I0210 11:17:21.702968 792122 cri.go:89] found id: "04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
I0210 11:17:21.702974 792122 cri.go:89] found id: ""
I0210 11:17:21.702981 792122 logs.go:282] 2 containers: [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01]
I0210 11:17:21.703043 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.706808 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.711614 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0210 11:17:21.711686 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0210 11:17:21.769142 792122 cri.go:89] found id: "4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
I0210 11:17:21.769166 792122 cri.go:89] found id: "3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
I0210 11:17:21.769171 792122 cri.go:89] found id: ""
I0210 11:17:21.769178 792122 logs.go:282] 2 containers: [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba]
I0210 11:17:21.769231 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.772814 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.776371 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0210 11:17:21.776467 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0210 11:17:21.835068 792122 cri.go:89] found id: "23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
I0210 11:17:21.835099 792122 cri.go:89] found id: "a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
I0210 11:17:21.835105 792122 cri.go:89] found id: ""
I0210 11:17:21.835112 792122 logs.go:282] 2 containers: [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d]
I0210 11:17:21.835205 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.839601 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.843809 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0210 11:17:21.843906 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0210 11:17:21.894020 792122 cri.go:89] found id: "2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
I0210 11:17:21.894042 792122 cri.go:89] found id: "8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
I0210 11:17:21.894047 792122 cri.go:89] found id: ""
I0210 11:17:21.894054 792122 logs.go:282] 2 containers: [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd]
I0210 11:17:21.894151 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.898071 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.902515 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0210 11:17:21.902616 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0210 11:17:21.980105 792122 cri.go:89] found id: "2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
I0210 11:17:21.980138 792122 cri.go:89] found id: "6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
I0210 11:17:21.980144 792122 cri.go:89] found id: ""
I0210 11:17:21.980151 792122 logs.go:282] 2 containers: [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1]
I0210 11:17:21.980235 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.984322 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.987666 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0210 11:17:21.987780 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0210 11:17:22.059620 792122 cri.go:89] found id: "aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
I0210 11:17:22.059644 792122 cri.go:89] found id: "d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
I0210 11:17:22.059649 792122 cri.go:89] found id: ""
I0210 11:17:22.059658 792122 logs.go:282] 2 containers: [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d]
I0210 11:17:22.059744 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.063872 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.067934 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0210 11:17:22.068028 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0210 11:17:22.120294 792122 cri.go:89] found id: "63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
I0210 11:17:22.120314 792122 cri.go:89] found id: "9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
I0210 11:17:22.120319 792122 cri.go:89] found id: ""
I0210 11:17:22.120326 792122 logs.go:282] 2 containers: [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4]
I0210 11:17:22.120379 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.124012 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.133616 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0210 11:17:22.133685 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0210 11:17:22.193873 792122 cri.go:89] found id: "b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
I0210 11:17:22.193892 792122 cri.go:89] found id: "221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
I0210 11:17:22.193897 792122 cri.go:89] found id: ""
I0210 11:17:22.193904 792122 logs.go:282] 2 containers: [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525]
I0210 11:17:22.193959 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.197703 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.201260 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0210 11:17:22.201380 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0210 11:17:22.252446 792122 cri.go:89] found id: "6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
I0210 11:17:22.252510 792122 cri.go:89] found id: ""
I0210 11:17:22.252533 792122 logs.go:282] 1 containers: [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7]
I0210 11:17:22.252606 792122 ssh_runner.go:195] Run: which crictl
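The block above is the discovery pass that precedes gathering: for each control-plane component, crictl lists every matching container, running or exited. Each core component reports two ids here, consistent with a restarted profile keeping the pre-restart container alongside the new one, while kubernetes-dashboard, which has only run once, reports a single id. The pattern, per component:

  # list all container ids (including exited ones) whose name matches
  sudo crictl ps -a --quiet --name=kube-apiserver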
I0210 11:17:22.256456 792122 logs.go:123] Gathering logs for kube-scheduler [8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd] ...
I0210 11:17:22.256522 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
I0210 11:17:22.319127 792122 logs.go:123] Gathering logs for kube-proxy [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708] ...
I0210 11:17:22.319197 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
I0210 11:17:22.371929 792122 logs.go:123] Gathering logs for kube-proxy [6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1] ...
I0210 11:17:22.371996 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
I0210 11:17:22.419946 792122 logs.go:123] Gathering logs for storage-provisioner [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84] ...
I0210 11:17:22.420016 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
I0210 11:17:22.500193 792122 logs.go:123] Gathering logs for describe nodes ...
I0210 11:17:22.500219 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0210 11:17:22.689017 792122 logs.go:123] Gathering logs for coredns [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198] ...
I0210 11:17:22.689049 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
I0210 11:17:22.771016 792122 logs.go:123] Gathering logs for kube-scheduler [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff] ...
I0210 11:17:22.771047 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
I0210 11:17:22.833433 792122 logs.go:123] Gathering logs for kindnet [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870] ...
I0210 11:17:22.833464 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
I0210 11:17:22.899680 792122 logs.go:123] Gathering logs for containerd ...
I0210 11:17:22.899757 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0210 11:17:22.995820 792122 logs.go:123] Gathering logs for dmesg ...
I0210 11:17:22.995915 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0210 11:17:23.022911 792122 logs.go:123] Gathering logs for etcd [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587] ...
I0210 11:17:23.022939 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
I0210 11:17:23.088083 792122 logs.go:123] Gathering logs for coredns [a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d] ...
I0210 11:17:23.088257 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
I0210 11:17:23.173762 792122 logs.go:123] Gathering logs for container status ...
I0210 11:17:23.173836 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0210 11:17:23.230605 792122 logs.go:123] Gathering logs for kube-apiserver [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1] ...
I0210 11:17:23.230682 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
I0210 11:17:23.306650 792122 logs.go:123] Gathering logs for kube-apiserver [04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01] ...
I0210 11:17:23.306724 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
I0210 11:17:23.388460 792122 logs.go:123] Gathering logs for kubernetes-dashboard [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7] ...
I0210 11:17:23.388501 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
I0210 11:17:23.442850 792122 logs.go:123] Gathering logs for kube-controller-manager [d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d] ...
I0210 11:17:23.442879 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
I0210 11:17:23.569314 792122 logs.go:123] Gathering logs for kindnet [9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4] ...
I0210 11:17:23.569354 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
I0210 11:17:23.623310 792122 logs.go:123] Gathering logs for storage-provisioner [221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525] ...
I0210 11:17:23.623338 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
I0210 11:17:23.669314 792122 logs.go:123] Gathering logs for kubelet ...
I0210 11:17:23.669343 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
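The "Found kubelet problem" entries that follow come from scanning the tail of the kubelet journal for known error signatures (reflector list/watch failures, pod_workers sync errors) and recording each hit. A rough stand-in for that scan (the pattern list is illustrative, not minikube's exact set):

  # approximate the problem scan over the same journal tail
  sudo journalctl -u kubelet -n 400 | grep -E "reflector.go|pod_workers.go"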
W0210 11:17:23.735791 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.495697 665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.736086 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496065 665 reflector.go:138] object-"kube-system"/"coredns-token-7cchl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-7cchl" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.736428 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496388 665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-r7rrz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-r7rrz" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.736690 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496738 665 reflector.go:138] object-"default"/"default-token-q8wzb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-q8wzb" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.736907 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.500993 665 reflector.go:138] object-"kube-system"/"kindnet-token-h7brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h7brt" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.737110 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501261 665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.737331 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501486 665 reflector.go:138] object-"kube-system"/"metrics-server-token-pddsx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-pddsx" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.737557 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501700 665 reflector.go:138] object-"kube-system"/"kube-proxy-token-92pf5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-92pf5" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.744448 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:43 old-k8s-version-705847 kubelet[665]: E0210 11:11:43.988520 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:23.744641 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:44 old-k8s-version-705847 kubelet[665]: E0210 11:11:44.494625 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.748240 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:57 old-k8s-version-705847 kubelet[665]: E0210 11:11:57.176598 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:23.750410 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:06 old-k8s-version-705847 kubelet[665]: E0210 11:12:06.587650 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.750747 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:07 old-k8s-version-705847 kubelet[665]: E0210 11:12:07.588161 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.750932 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:12 old-k8s-version-705847 kubelet[665]: E0210 11:12:12.166247 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.751597 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:13 old-k8s-version-705847 kubelet[665]: E0210 11:12:13.359119 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.752034 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:15 old-k8s-version-705847 kubelet[665]: E0210 11:12:15.615500 665 pod_workers.go:191] Error syncing pod 9fb88c78-7e13-4c39-b861-6a75febd2f29 ("storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"
W0210 11:17:23.752959 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:24 old-k8s-version-705847 kubelet[665]: E0210 11:12:24.650563 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.755482 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:26 old-k8s-version-705847 kubelet[665]: E0210 11:12:26.179066 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:23.755947 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:33 old-k8s-version-705847 kubelet[665]: E0210 11:12:33.359712 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.756133 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:38 old-k8s-version-705847 kubelet[665]: E0210 11:12:38.166028 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.756462 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:44 old-k8s-version-705847 kubelet[665]: E0210 11:12:44.165493 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.756668 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:52 old-k8s-version-705847 kubelet[665]: E0210 11:12:52.166523 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.757257 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:58 old-k8s-version-705847 kubelet[665]: E0210 11:12:58.763561 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.757662 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:03 old-k8s-version-705847 kubelet[665]: E0210 11:13:03.358918 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.757866 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:06 old-k8s-version-705847 kubelet[665]: E0210 11:13:06.166020 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.758208 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:15 old-k8s-version-705847 kubelet[665]: E0210 11:13:15.165381 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.760718 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:20 old-k8s-version-705847 kubelet[665]: E0210 11:13:20.182857 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:23.761078 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:27 old-k8s-version-705847 kubelet[665]: E0210 11:13:27.165926 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.761277 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:34 old-k8s-version-705847 kubelet[665]: E0210 11:13:34.166696 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.761645 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:38 old-k8s-version-705847 kubelet[665]: E0210 11:13:38.165396 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.761831 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:46 old-k8s-version-705847 kubelet[665]: E0210 11:13:46.167918 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.762418 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:51 old-k8s-version-705847 kubelet[665]: E0210 11:13:51.896354 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.762750 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:53 old-k8s-version-705847 kubelet[665]: E0210 11:13:53.359574 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.762936 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:00 old-k8s-version-705847 kubelet[665]: E0210 11:14:00.171443 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.763264 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:05 old-k8s-version-705847 kubelet[665]: E0210 11:14:05.165923 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.763453 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:12 old-k8s-version-705847 kubelet[665]: E0210 11:14:12.166921 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.763802 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:17 old-k8s-version-705847 kubelet[665]: E0210 11:14:17.165864 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.763989 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:26 old-k8s-version-705847 kubelet[665]: E0210 11:14:26.165733 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.764385 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:31 old-k8s-version-705847 kubelet[665]: E0210 11:14:31.165921 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.764574 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:40 old-k8s-version-705847 kubelet[665]: E0210 11:14:40.166598 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.764916 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:46 old-k8s-version-705847 kubelet[665]: E0210 11:14:46.165929 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.767429 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:55 old-k8s-version-705847 kubelet[665]: E0210 11:14:55.174499 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:23.767825 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:58 old-k8s-version-705847 kubelet[665]: E0210 11:14:58.165404 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.768160 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:09 old-k8s-version-705847 kubelet[665]: E0210 11:15:09.165442 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.768346 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:10 old-k8s-version-705847 kubelet[665]: E0210 11:15:10.167175 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.768960 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:21 old-k8s-version-705847 kubelet[665]: E0210 11:15:21.169488 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.769151 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:22 old-k8s-version-705847 kubelet[665]: E0210 11:15:22.173180 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.769564 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:23 old-k8s-version-705847 kubelet[665]: E0210 11:15:23.359507 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.769768 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:35 old-k8s-version-705847 kubelet[665]: E0210 11:15:35.165986 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.770114 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:36 old-k8s-version-705847 kubelet[665]: E0210 11:15:36.165557 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.770306 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:50 old-k8s-version-705847 kubelet[665]: E0210 11:15:50.166193 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.770642 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: E0210 11:15:51.165434 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.770826 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:01 old-k8s-version-705847 kubelet[665]: E0210 11:16:01.166143 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.771154 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: E0210 11:16:03.165397 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.771340 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:13 old-k8s-version-705847 kubelet[665]: E0210 11:16:13.165902 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.771666 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: E0210 11:16:14.165502 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.771864 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:25 old-k8s-version-705847 kubelet[665]: E0210 11:16:25.165821 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.772192 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: E0210 11:16:28.166303 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.772377 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:39 old-k8s-version-705847 kubelet[665]: E0210 11:16:39.165848 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.772703 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.772887 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.773213 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.773398 792122 logs.go:138] Found kubelet problem: Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.773731 792122 logs.go:138] Found kubelet problem: Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.774137 792122 logs.go:138] Found kubelet problem: Feb 10 11:17:14 old-k8s-version-705847 kubelet[665]: E0210 11:17:14.167326 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0210 11:17:23.774168 792122 logs.go:123] Gathering logs for etcd [3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba] ...
I0210 11:17:23.774184 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
I0210 11:17:23.845922 792122 logs.go:123] Gathering logs for kube-controller-manager [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa] ...
I0210 11:17:23.845950 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
I0210 11:17:23.939309 792122 out.go:358] Setting ErrFile to fd 2...
I0210 11:17:23.939399 792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0210 11:17:23.939494 792122 out.go:270] X Problems detected in kubelet:
W0210 11:17:23.939681 792122 out.go:270] Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.939739 792122 out.go:270] Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.939779 792122 out.go:270] Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.939835 792122 out.go:270] Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.939870 792122 out.go:270] Feb 10 11:17:14 old-k8s-version-705847 kubelet[665]: E0210 11:17:14.167326 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0210 11:17:23.939920 792122 out.go:358] Setting ErrFile to fd 2...
I0210 11:17:23.939941 792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 11:17:33.941594 792122 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0210 11:17:33.966671 792122 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0210 11:17:33.970082 792122 out.go:201]
W0210 11:17:33.973071 792122 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0210 11:17:33.973117 792122 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0210 11:17:33.973146 792122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0210 11:17:33.973158 792122 out.go:270] *
W0210 11:17:33.974109 792122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0210 11:17:33.977004 792122 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-705847 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
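Note: exit status 102 is minikube's K8S_UNHEALTHY_CONTROL_PLANE code; the apiserver answered /healthz with 200 above, but the control plane never reported the requested v1.20.0. The kubelet problems flagged earlier can be re-checked by hand with the same commands the log shows minikube running (a sketch against this run's profile; flag names assumed from minikube v1.35.0, verify with `minikube logs --help`):

# kubelet unit logs, exactly as logs.go gathers them above
minikube -p old-k8s-version-705847 ssh -- sudo journalctl -u kubelet -n 400
# all CRI containers, as the cri.go steps above list via crictl
minikube -p old-k8s-version-705847 ssh -- sudo crictl ps -a
# only the entries minikube itself classifies as known problems
out/minikube-linux-arm64 -p old-k8s-version-705847 logs --problems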
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-705847
helpers_test.go:235: (dbg) docker inspect old-k8s-version-705847:
-- stdout --
[
{
"Id": "a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5",
"Created": "2025-02-10T11:08:15.654461625Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 792321,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-02-10T11:11:16.381808219Z",
"FinishedAt": "2025-02-10T11:11:15.181356464Z"
},
"Image": "sha256:0434cf58b6dbace281e5de753aa4b2e3fe33dc9a3be53021531403743c3f155a",
"ResolvConfPath": "/var/lib/docker/containers/a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5/hostname",
"HostsPath": "/var/lib/docker/containers/a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5/hosts",
"LogPath": "/var/lib/docker/containers/a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5/a745477e05fbd4f139dc8a7b803bcbc4b9880eb3ef07588863bad9143f8d60f5-json.log",
"Name": "/old-k8s-version-705847",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-705847:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-705847",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/ab002ae6dcbc76d54756237b1d8f947fd6d10a3bdae1ea5ca0aa20c6446c2c67-init/diff:/var/lib/docker/overlay2/26239c014af6c1ba34d676e86726c37031bac25f65804c44ae4f8df935bea840/diff",
"MergedDir": "/var/lib/docker/overlay2/ab002ae6dcbc76d54756237b1d8f947fd6d10a3bdae1ea5ca0aa20c6446c2c67/merged",
"UpperDir": "/var/lib/docker/overlay2/ab002ae6dcbc76d54756237b1d8f947fd6d10a3bdae1ea5ca0aa20c6446c2c67/diff",
"WorkDir": "/var/lib/docker/overlay2/ab002ae6dcbc76d54756237b1d8f947fd6d10a3bdae1ea5ca0aa20c6446c2c67/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-705847",
"Source": "/var/lib/docker/volumes/old-k8s-version-705847/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-705847",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-705847",
"name.minikube.sigs.k8s.io": "old-k8s-version-705847",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "5d29c017b5e755a03ffcce91a174ec923f04dc63dd87e46edaf834f84250587b",
"SandboxKey": "/var/run/docker/netns/5d29c017b5e7",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33798"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33799"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33802"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33800"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33801"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-705847": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "fc44ac08ef1f81a1cabfe5ec2acc66b7f9febc09e6d34d30523f23893af91f16",
"EndpointID": "02bcf43005e65955eab1cc5f9bdb039c8ddafa874db2d40e40f1941e004fe9a9",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-705847",
"a745477e05fb"
]
}
}
}
}
]
-- /stdout --
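Note: the inspect blob above shows a healthy container: State.Status is "running", RestartCount is 0, and the network block carries the node IP 192.168.76.2 that the healthz probe reached earlier, so the failure sits inside Kubernetes rather than at the Docker layer. The same fields can be read without the full dump using docker's standard Go-template output, e.g.:

docker inspect old-k8s-version-705847 --format '{{.State.Status}} {{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'

This prints the container state and the host port bound to the apiserver's 8443/tcp (33801 in this run).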
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-705847 -n old-k8s-version-705847
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-705847 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-705847 logs -n 25: (3.159324309s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-369393 | cert-expiration-369393 | jenkins | v1.35.0 | 10 Feb 25 11:06 UTC | 10 Feb 25 11:07 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-962978 | force-systemd-env-962978 | jenkins | v1.35.0 | 10 Feb 25 11:07 UTC | 10 Feb 25 11:07 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-962978 | force-systemd-env-962978 | jenkins | v1.35.0 | 10 Feb 25 11:07 UTC | 10 Feb 25 11:07 UTC |
| start | -p cert-options-679762 | cert-options-679762 | jenkins | v1.35.0 | 10 Feb 25 11:07 UTC | 10 Feb 25 11:08 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-679762 ssh | cert-options-679762 | jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-679762 -- sudo | cert-options-679762 | jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-679762 | cert-options-679762 | jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:08 UTC |
| start | -p old-k8s-version-705847 | old-k8s-version-705847 | jenkins | v1.35.0 | 10 Feb 25 11:08 UTC | 10 Feb 25 11:10 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-369393 | cert-expiration-369393 | jenkins | v1.35.0 | 10 Feb 25 11:10 UTC | 10 Feb 25 11:10 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-369393 | cert-expiration-369393 | jenkins | v1.35.0 | 10 Feb 25 11:10 UTC | 10 Feb 25 11:10 UTC |
| start | -p no-preload-861376 | no-preload-861376 | jenkins | v1.35.0 | 10 Feb 25 11:10 UTC | 10 Feb 25 11:11 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| addons | enable metrics-server -p old-k8s-version-705847 | old-k8s-version-705847 | jenkins | v1.35.0 | 10 Feb 25 11:11 UTC | 10 Feb 25 11:11 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-705847 | old-k8s-version-705847 | jenkins | v1.35.0 | 10 Feb 25 11:11 UTC | 10 Feb 25 11:11 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-705847 | old-k8s-version-705847 | jenkins | v1.35.0 | 10 Feb 25 11:11 UTC | 10 Feb 25 11:11 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-705847 | old-k8s-version-705847 | jenkins | v1.35.0 | 10 Feb 25 11:11 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-861376 | no-preload-861376 | jenkins | v1.35.0 | 10 Feb 25 11:12 UTC | 10 Feb 25 11:12 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-861376 | no-preload-861376 | jenkins | v1.35.0 | 10 Feb 25 11:12 UTC | 10 Feb 25 11:12 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-861376 | no-preload-861376 | jenkins | v1.35.0 | 10 Feb 25 11:12 UTC | 10 Feb 25 11:12 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-861376 | no-preload-861376 | jenkins | v1.35.0 | 10 Feb 25 11:12 UTC | 10 Feb 25 11:16 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
| image | no-preload-861376 image list | no-preload-861376 | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | 10 Feb 25 11:17 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-861376 | no-preload-861376 | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | 10 Feb 25 11:17 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-861376 | no-preload-861376 | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | 10 Feb 25 11:17 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-861376 | no-preload-861376 | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | 10 Feb 25 11:17 UTC |
| delete | -p no-preload-861376 | no-preload-861376 | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | 10 Feb 25 11:17 UTC |
| start | -p embed-certs-822142 | embed-certs-822142 | jenkins | v1.35.0 | 10 Feb 25 11:17 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.32.1 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
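Note: the Audit table also explains the metrics-server ImagePullBackOff noise throughout this run: the test deliberately repoints the addon at the unresolvable registry fake.domain, so those pull failures are expected rather than a regression. The command responsible, verbatim from the table:

out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-705847 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain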
==> Last Start <==
Log file created at: 2025/02/10 11:17:07
Running on machine: ip-172-31-31-251
Binary: Built with gc go1.23.4 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0210 11:17:07.790759 802973 out.go:345] Setting OutFile to fd 1 ...
I0210 11:17:07.790882 802973 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 11:17:07.790894 802973 out.go:358] Setting ErrFile to fd 2...
I0210 11:17:07.790900 802973 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 11:17:07.791160 802973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20385-576242/.minikube/bin
I0210 11:17:07.791583 802973 out.go:352] Setting JSON to false
I0210 11:17:07.792674 802973 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14373,"bootTime":1739171855,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1075-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
I0210 11:17:07.792754 802973 start.go:139] virtualization:
I0210 11:17:07.796772 802973 out.go:177] * [embed-certs-822142] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0210 11:17:07.801121 802973 out.go:177] - MINIKUBE_LOCATION=20385
I0210 11:17:07.801324 802973 notify.go:220] Checking for updates...
I0210 11:17:07.807471 802973 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0210 11:17:07.810736 802973 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20385-576242/kubeconfig
I0210 11:17:07.813830 802973 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20385-576242/.minikube
I0210 11:17:07.816854 802973 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0210 11:17:07.819810 802973 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0210 11:17:07.823313 802973 config.go:182] Loaded profile config "old-k8s-version-705847": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0210 11:17:07.823443 802973 driver.go:394] Setting default libvirt URI to qemu:///system
I0210 11:17:07.854450 802973 docker.go:123] docker version: linux-27.5.1:Docker Engine - Community
I0210 11:17:07.854715 802973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0210 11:17:07.913226 802973 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-10 11:17:07.90331121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0210 11:17:07.913348 802973 docker.go:318] overlay module found
I0210 11:17:07.916449 802973 out.go:177] * Using the docker driver based on user configuration
I0210 11:17:07.919271 802973 start.go:297] selected driver: docker
I0210 11:17:07.919290 802973 start.go:901] validating driver "docker" against <nil>
I0210 11:17:07.919304 802973 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0210 11:17:07.920047 802973 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0210 11:17:08.006093 802973 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:53 SystemTime:2025-02-10 11:17:07.987821154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1075-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb Expected:bcc810d6b9066471b0b6fa75f557a15a1cbf31bb} RuncCommit:{ID:v1.2.4-0-g6c52b3f Expected:v1.2.4-0-g6c52b3f} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.20.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.32.4]] Warnings:<nil>}}
I0210 11:17:08.006379 802973 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0210 11:17:08.006623 802973 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0210 11:17:08.009615 802973 out.go:177] * Using Docker driver with root privileges
I0210 11:17:08.012671 802973 cni.go:84] Creating CNI manager for ""
I0210 11:17:08.013196 802973 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0210 11:17:08.013236 802973 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0210 11:17:08.013378 802973 start.go:340] cluster config:
{Name:embed-certs-822142 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-822142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0210 11:17:08.016735 802973 out.go:177] * Starting "embed-certs-822142" primary control-plane node in "embed-certs-822142" cluster
I0210 11:17:08.019640 802973 cache.go:121] Beginning downloading kic base image for docker with containerd
I0210 11:17:08.022764 802973 out.go:177] * Pulling base image v0.0.46 ...
I0210 11:17:08.025665 802973 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0210 11:17:08.025756 802973 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4
I0210 11:17:08.025771 802973 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon
I0210 11:17:08.025789 802973 cache.go:56] Caching tarball of preloaded images
I0210 11:17:08.025905 802973 preload.go:172] Found /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0210 11:17:08.025917 802973 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on containerd
I0210 11:17:08.026046 802973 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/config.json ...
I0210 11:17:08.026102 802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/config.json: {Name:mkcf3cebecc98801d43dfd996a72ac5ae7403fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 11:17:08.047371 802973 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 in local docker daemon, skipping pull
I0210 11:17:08.047398 802973 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 exists in daemon, skipping load
I0210 11:17:08.047419 802973 cache.go:230] Successfully downloaded all kic artifacts
I0210 11:17:08.047453 802973 start.go:360] acquireMachinesLock for embed-certs-822142: {Name:mk8e9768e203098d1ff183e3ceae266c8926e0e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0210 11:17:08.047579 802973 start.go:364] duration metric: took 102.517µs to acquireMachinesLock for "embed-certs-822142"
I0210 11:17:08.047613 802973 start.go:93] Provisioning new machine with config: &{Name:embed-certs-822142 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-822142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0210 11:17:08.047686 802973 start.go:125] createHost starting for "" (driver="docker")
I0210 11:17:06.951299 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:17:09.437288 792122 pod_ready.go:103] pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace has status "Ready":"False"
I0210 11:17:09.437357 792122 pod_ready.go:82] duration metric: took 4m0.006581256s for pod "metrics-server-9975d5f86-nvn7z" in "kube-system" namespace to be "Ready" ...
E0210 11:17:09.437376 792122 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0210 11:17:09.437385 792122 pod_ready.go:39] duration metric: took 5m26.939532937s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0210 11:17:09.437403 792122 api_server.go:52] waiting for apiserver process to appear ...
I0210 11:17:09.437440 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0210 11:17:09.437540 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0210 11:17:09.487231 792122 cri.go:89] found id: "ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
I0210 11:17:09.487255 792122 cri.go:89] found id: "04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
I0210 11:17:09.487276 792122 cri.go:89] found id: ""
I0210 11:17:09.487283 792122 logs.go:282] 2 containers: [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01]
I0210 11:17:09.487345 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.491577 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.495484 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0210 11:17:09.495557 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0210 11:17:09.544535 792122 cri.go:89] found id: "4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
I0210 11:17:09.544558 792122 cri.go:89] found id: "3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
I0210 11:17:09.544563 792122 cri.go:89] found id: ""
I0210 11:17:09.544570 792122 logs.go:282] 2 containers: [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba]
I0210 11:17:09.544628 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.548930 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.552295 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0210 11:17:09.552365 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0210 11:17:09.604781 792122 cri.go:89] found id: "23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
I0210 11:17:09.604800 792122 cri.go:89] found id: "a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
I0210 11:17:09.604806 792122 cri.go:89] found id: ""
I0210 11:17:09.604812 792122 logs.go:282] 2 containers: [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d]
I0210 11:17:09.604866 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.608845 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.613042 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0210 11:17:09.613164 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0210 11:17:09.658259 792122 cri.go:89] found id: "2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
I0210 11:17:09.658335 792122 cri.go:89] found id: "8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
I0210 11:17:09.658389 792122 cri.go:89] found id: ""
I0210 11:17:09.658414 792122 logs.go:282] 2 containers: [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd]
I0210 11:17:09.658491 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.662928 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.666904 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0210 11:17:09.667021 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0210 11:17:09.714379 792122 cri.go:89] found id: "2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
I0210 11:17:09.714455 792122 cri.go:89] found id: "6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
I0210 11:17:09.714475 792122 cri.go:89] found id: ""
I0210 11:17:09.714502 792122 logs.go:282] 2 containers: [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1]
I0210 11:17:09.714574 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.718758 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.722517 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0210 11:17:09.722636 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0210 11:17:09.771474 792122 cri.go:89] found id: "aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
I0210 11:17:09.771545 792122 cri.go:89] found id: "d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
I0210 11:17:09.771565 792122 cri.go:89] found id: ""
I0210 11:17:09.771588 792122 logs.go:282] 2 containers: [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d]
I0210 11:17:09.771661 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.775353 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.779153 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0210 11:17:09.779273 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0210 11:17:09.825744 792122 cri.go:89] found id: "63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
I0210 11:17:09.825818 792122 cri.go:89] found id: "9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
I0210 11:17:09.825838 792122 cri.go:89] found id: ""
I0210 11:17:09.825861 792122 logs.go:282] 2 containers: [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4]
I0210 11:17:09.825933 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.829905 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.833685 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0210 11:17:09.833803 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0210 11:17:09.880184 792122 cri.go:89] found id: "b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
I0210 11:17:09.880260 792122 cri.go:89] found id: "221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
I0210 11:17:09.880279 792122 cri.go:89] found id: ""
I0210 11:17:09.880303 792122 logs.go:282] 2 containers: [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525]
I0210 11:17:09.880385 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.884665 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.888489 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0210 11:17:09.888609 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0210 11:17:09.933140 792122 cri.go:89] found id: "6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
I0210 11:17:09.933213 792122 cri.go:89] found id: ""
I0210 11:17:09.933235 792122 logs.go:282] 1 containers: [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7]
I0210 11:17:09.933325 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:09.937792 792122 logs.go:123] Gathering logs for dmesg ...
I0210 11:17:09.937862 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0210 11:17:09.973568 792122 logs.go:123] Gathering logs for kube-apiserver [04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01] ...
I0210 11:17:09.973650 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
I0210 11:17:10.088454 792122 logs.go:123] Gathering logs for kube-proxy [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708] ...
I0210 11:17:10.088500 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
I0210 11:17:10.153844 792122 logs.go:123] Gathering logs for kube-apiserver [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1] ...
I0210 11:17:10.153874 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
I0210 11:17:10.260745 792122 logs.go:123] Gathering logs for etcd [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587] ...
I0210 11:17:10.260782 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
I0210 11:17:10.321419 792122 logs.go:123] Gathering logs for etcd [3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba] ...
I0210 11:17:10.321451 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
I0210 11:17:10.397177 792122 logs.go:123] Gathering logs for coredns [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198] ...
I0210 11:17:10.397207 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
I0210 11:17:10.462194 792122 logs.go:123] Gathering logs for kube-scheduler [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff] ...
I0210 11:17:10.462224 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
I0210 11:17:10.528776 792122 logs.go:123] Gathering logs for kube-proxy [6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1] ...
I0210 11:17:10.528803 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
I0210 11:17:10.574450 792122 logs.go:123] Gathering logs for kube-controller-manager [d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d] ...
I0210 11:17:10.574521 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
I0210 11:17:10.652275 792122 logs.go:123] Gathering logs for kindnet [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870] ...
I0210 11:17:10.652360 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
I0210 11:17:10.716278 792122 logs.go:123] Gathering logs for storage-provisioner [221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525] ...
I0210 11:17:10.716454 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
I0210 11:17:10.765251 792122 logs.go:123] Gathering logs for kubelet ...
I0210 11:17:10.765318 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0210 11:17:10.826952 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.495697 665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.827210 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496065 665 reflector.go:138] object-"kube-system"/"coredns-token-7cchl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-7cchl" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.827464 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496388 665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-r7rrz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-r7rrz" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.827695 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496738 665 reflector.go:138] object-"default"/"default-token-q8wzb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-q8wzb" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.827929 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.500993 665 reflector.go:138] object-"kube-system"/"kindnet-token-h7brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h7brt" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.828154 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501261 665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.828396 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501486 665 reflector.go:138] object-"kube-system"/"metrics-server-token-pddsx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-pddsx" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.828635 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501700 665 reflector.go:138] object-"kube-system"/"kube-proxy-token-92pf5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-92pf5" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:10.835472 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:43 old-k8s-version-705847 kubelet[665]: E0210 11:11:43.988520 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:10.835682 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:44 old-k8s-version-705847 kubelet[665]: E0210 11:11:44.494625 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.839280 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:57 old-k8s-version-705847 kubelet[665]: E0210 11:11:57.176598 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:10.841560 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:06 old-k8s-version-705847 kubelet[665]: E0210 11:12:06.587650 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.841923 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:07 old-k8s-version-705847 kubelet[665]: E0210 11:12:07.588161 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.842131 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:12 old-k8s-version-705847 kubelet[665]: E0210 11:12:12.166247 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.842823 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:13 old-k8s-version-705847 kubelet[665]: E0210 11:12:13.359119 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.843281 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:15 old-k8s-version-705847 kubelet[665]: E0210 11:12:15.615500 665 pod_workers.go:191] Error syncing pod 9fb88c78-7e13-4c39-b861-6a75febd2f29 ("storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"
W0210 11:17:10.844228 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:24 old-k8s-version-705847 kubelet[665]: E0210 11:12:24.650563 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.846762 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:26 old-k8s-version-705847 kubelet[665]: E0210 11:12:26.179066 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:10.847253 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:33 old-k8s-version-705847 kubelet[665]: E0210 11:12:33.359712 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.847462 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:38 old-k8s-version-705847 kubelet[665]: E0210 11:12:38.166028 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.847813 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:44 old-k8s-version-705847 kubelet[665]: E0210 11:12:44.165493 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.848022 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:52 old-k8s-version-705847 kubelet[665]: E0210 11:12:52.166523 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.848632 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:58 old-k8s-version-705847 kubelet[665]: E0210 11:12:58.763561 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.848983 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:03 old-k8s-version-705847 kubelet[665]: E0210 11:13:03.358918 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.849189 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:06 old-k8s-version-705847 kubelet[665]: E0210 11:13:06.166020 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.849553 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:15 old-k8s-version-705847 kubelet[665]: E0210 11:13:15.165381 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.852128 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:20 old-k8s-version-705847 kubelet[665]: E0210 11:13:20.182857 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:10.852482 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:27 old-k8s-version-705847 kubelet[665]: E0210 11:13:27.165926 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.852710 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:34 old-k8s-version-705847 kubelet[665]: E0210 11:13:34.166696 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.853073 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:38 old-k8s-version-705847 kubelet[665]: E0210 11:13:38.165396 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.853287 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:46 old-k8s-version-705847 kubelet[665]: E0210 11:13:46.167918 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.853938 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:51 old-k8s-version-705847 kubelet[665]: E0210 11:13:51.896354 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.854291 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:53 old-k8s-version-705847 kubelet[665]: E0210 11:13:53.359574 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.854509 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:00 old-k8s-version-705847 kubelet[665]: E0210 11:14:00.171443 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.854876 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:05 old-k8s-version-705847 kubelet[665]: E0210 11:14:05.165923 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
I0210 11:17:08.051215 802973 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0210 11:17:08.051522 802973 start.go:159] libmachine.API.Create for "embed-certs-822142" (driver="docker")
I0210 11:17:08.051569 802973 client.go:168] LocalClient.Create starting
I0210 11:17:08.051638 802973 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem
I0210 11:17:08.051680 802973 main.go:141] libmachine: Decoding PEM data...
I0210 11:17:08.051698 802973 main.go:141] libmachine: Parsing certificate...
I0210 11:17:08.051761 802973 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem
I0210 11:17:08.051782 802973 main.go:141] libmachine: Decoding PEM data...
I0210 11:17:08.051801 802973 main.go:141] libmachine: Parsing certificate...
I0210 11:17:08.052246 802973 cli_runner.go:164] Run: docker network inspect embed-certs-822142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0210 11:17:08.070709 802973 cli_runner.go:211] docker network inspect embed-certs-822142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0210 11:17:08.070809 802973 network_create.go:284] running [docker network inspect embed-certs-822142] to gather additional debugging logs...
I0210 11:17:08.070864 802973 cli_runner.go:164] Run: docker network inspect embed-certs-822142
W0210 11:17:08.090054 802973 cli_runner.go:211] docker network inspect embed-certs-822142 returned with exit code 1
I0210 11:17:08.090104 802973 network_create.go:287] error running [docker network inspect embed-certs-822142]: docker network inspect embed-certs-822142: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-822142 not found
I0210 11:17:08.090123 802973 network_create.go:289] output of [docker network inspect embed-certs-822142]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-822142 not found
** /stderr **
I0210 11:17:08.090233 802973 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0210 11:17:08.108019 802973 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-37f7c82b9b3f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:2a:78:ce:04} reservation:<nil>}
I0210 11:17:08.108521 802973 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bd1f232eef2a IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:47:fc:c8:24} reservation:<nil>}
I0210 11:17:08.109080 802973 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e1b5d2238101 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d6:27:32:a1} reservation:<nil>}
I0210 11:17:08.109593 802973 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-fc44ac08ef1f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:8a:ec:dc:83} reservation:<nil>}
I0210 11:17:08.110202 802973 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a22750}
I0210 11:17:08.110232 802973 network_create.go:124] attempt to create docker network embed-certs-822142 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0210 11:17:08.110303 802973 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-822142 embed-certs-822142
I0210 11:17:08.200418 802973 network_create.go:108] docker network embed-certs-822142 192.168.85.0/24 created
I0210 11:17:08.200451 802973 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-822142" container
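[editor's note] A hedged sketch of the subnet scan visible just above: the log walks 192.168.x.0/24 candidates with x stepping by 9 (49, 58, 67, 76, 85), skips blocks already backing a docker bridge, and derives the gateway (.1) and the node's static IP (.2) from the first free block. The hard-coded "taken" set below is an assumption for the sketch; minikube discovers it from the host's interfaces.

package main

import "fmt"

func main() {
	// Subnets the log reported as taken on this host.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
	}
	for x := 49; x <= 255; x += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", x)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", x)
		nodeIP := fmt.Sprintf("192.168.%d.2", x)
		fmt.Printf("using free private subnet %s (gateway %s, static node IP %s)\n", cidr, gateway, nodeIP)
		return
	}
}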
I0210 11:17:08.200524 802973 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0210 11:17:08.217784 802973 cli_runner.go:164] Run: docker volume create embed-certs-822142 --label name.minikube.sigs.k8s.io=embed-certs-822142 --label created_by.minikube.sigs.k8s.io=true
I0210 11:17:08.237235 802973 oci.go:103] Successfully created a docker volume embed-certs-822142
I0210 11:17:08.237365 802973 cli_runner.go:164] Run: docker run --rm --name embed-certs-822142-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-822142 --entrypoint /usr/bin/test -v embed-certs-822142:/var gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -d /var/lib
I0210 11:17:08.913221 802973 oci.go:107] Successfully prepared a docker volume embed-certs-822142
I0210 11:17:08.913273 802973 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0210 11:17:08.913294 802973 kic.go:194] Starting extracting preloaded images to volume ...
I0210 11:17:08.913371 802973 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-822142:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir
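[editor's note] The preload step above mounts the lz4 tarball read-only into a throwaway kicbase container and untars it into the cluster's named volume. A minimal Go sketch of that exact `docker run --rm --entrypoint /usr/bin/tar` invocation; the paths and image digest are copied from the log, and the sketch assumes docker is on PATH.

package main

import (
	"os"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4"
	image := "gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279"
	// Extract the preloaded images into the volume named after the profile.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", "embed-certs-822142:/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}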
W0210 11:17:10.855096 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:12 old-k8s-version-705847 kubelet[665]: E0210 11:14:12.166921 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.855567 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:17 old-k8s-version-705847 kubelet[665]: E0210 11:14:17.165864 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.855778 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:26 old-k8s-version-705847 kubelet[665]: E0210 11:14:26.165733 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.856158 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:31 old-k8s-version-705847 kubelet[665]: E0210 11:14:31.165921 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.856381 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:40 old-k8s-version-705847 kubelet[665]: E0210 11:14:40.166598 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.856738 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:46 old-k8s-version-705847 kubelet[665]: E0210 11:14:46.165929 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.859236 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:55 old-k8s-version-705847 kubelet[665]: E0210 11:14:55.174499 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:10.859591 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:58 old-k8s-version-705847 kubelet[665]: E0210 11:14:58.165404 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.859943 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:09 old-k8s-version-705847 kubelet[665]: E0210 11:15:09.165442 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.864467 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:10 old-k8s-version-705847 kubelet[665]: E0210 11:15:10.167175 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.865098 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:21 old-k8s-version-705847 kubelet[665]: E0210 11:15:21.169488 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.865311 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:22 old-k8s-version-705847 kubelet[665]: E0210 11:15:22.173180 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.865688 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:23 old-k8s-version-705847 kubelet[665]: E0210 11:15:23.359507 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.865901 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:35 old-k8s-version-705847 kubelet[665]: E0210 11:15:35.165986 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.866255 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:36 old-k8s-version-705847 kubelet[665]: E0210 11:15:36.165557 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.866470 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:50 old-k8s-version-705847 kubelet[665]: E0210 11:15:50.166193 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.866821 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: E0210 11:15:51.165434 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.867030 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:01 old-k8s-version-705847 kubelet[665]: E0210 11:16:01.166143 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.867413 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: E0210 11:16:03.165397 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.867627 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:13 old-k8s-version-705847 kubelet[665]: E0210 11:16:13.165902 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.868007 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: E0210 11:16:14.165502 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.868230 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:25 old-k8s-version-705847 kubelet[665]: E0210 11:16:25.165821 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.868594 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: E0210 11:16:28.166303 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.868802 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:39 old-k8s-version-705847 kubelet[665]: E0210 11:16:39.165848 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.869151 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.869370 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.869737 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:10.869948 792122 logs.go:138] Found kubelet problem: Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:10.870300 792122 logs.go:138] Found kubelet problem: Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
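[editor's note] The long run of "Found kubelet problem" warnings above comes from scanning `journalctl -u kubelet -n 400` for error-level klog lines. A hedged Go sketch of such a filter; the regex is my approximation of what minikube's logs.go matches (klog lines starting with "E" plus a date after the syslog prefix), not its actual pattern.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"regexp"
)

func main() {
	cmd := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// Flag error-level kubelet lines, e.g. "kubelet[665]: E0210 11:16:50.166097 ...".
	problem := regexp.MustCompile(`kubelet\[\d+\]: E\d{4} `)
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		if line := sc.Text(); problem.MatchString(line) {
			fmt.Println("Found kubelet problem:", line)
		}
	}
	cmd.Wait()
}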
I0210 11:17:10.870323 792122 logs.go:123] Gathering logs for describe nodes ...
I0210 11:17:10.870349 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0210 11:17:11.085761 792122 logs.go:123] Gathering logs for coredns [a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d] ...
I0210 11:17:11.085801 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
I0210 11:17:11.150903 792122 logs.go:123] Gathering logs for kube-scheduler [8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd] ...
I0210 11:17:11.150935 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
I0210 11:17:11.209154 792122 logs.go:123] Gathering logs for storage-provisioner [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84] ...
I0210 11:17:11.209226 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
I0210 11:17:11.266387 792122 logs.go:123] Gathering logs for container status ...
I0210 11:17:11.266414 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0210 11:17:11.330870 792122 logs.go:123] Gathering logs for kube-controller-manager [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa] ...
I0210 11:17:11.330958 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
I0210 11:17:11.456996 792122 logs.go:123] Gathering logs for kindnet [9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4] ...
I0210 11:17:11.457085 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
I0210 11:17:11.505129 792122 logs.go:123] Gathering logs for kubernetes-dashboard [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7] ...
I0210 11:17:11.505201 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
I0210 11:17:11.562557 792122 logs.go:123] Gathering logs for containerd ...
I0210 11:17:11.562640 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0210 11:17:11.629917 792122 out.go:358] Setting ErrFile to fd 2...
I0210 11:17:11.629991 792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0210 11:17:11.630090 792122 out.go:270] X Problems detected in kubelet:
W0210 11:17:11.630134 792122 out.go:270] Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:11.630302 792122 out.go:270] Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:11.630335 792122 out.go:270] Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:11.630379 792122 out.go:270] Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:11.630431 792122 out.go:270] Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
I0210 11:17:11.630463 792122 out.go:358] Setting ErrFile to fd 2...
I0210 11:17:11.630494 792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0210 11:17:14.537459 802973 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20385-576242/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-822142:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 -I lz4 -xf /preloaded.tar -C /extractDir: (5.624049494s)
I0210 11:17:14.537572 802973 kic.go:203] duration metric: took 5.624205943s to extract preloaded images to volume ...
W0210 11:17:14.537710 802973 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0210 11:17:14.537830 802973 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0210 11:17:14.587765 802973 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-822142 --name embed-certs-822142 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-822142 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-822142 --network embed-certs-822142 --ip 192.168.85.2 --volume embed-certs-822142:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279
I0210 11:17:14.948338 802973 cli_runner.go:164] Run: docker container inspect embed-certs-822142 --format={{.State.Running}}
I0210 11:17:14.969176 802973 cli_runner.go:164] Run: docker container inspect embed-certs-822142 --format={{.State.Status}}
I0210 11:17:14.989868 802973 cli_runner.go:164] Run: docker exec embed-certs-822142 stat /var/lib/dpkg/alternatives/iptables
I0210 11:17:15.056747 802973 oci.go:144] the created container "embed-certs-822142" has a running status.
I0210 11:17:15.056788 802973 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa...
I0210 11:17:15.243142 802973 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0210 11:17:15.273892 802973 cli_runner.go:164] Run: docker container inspect embed-certs-822142 --format={{.State.Status}}
I0210 11:17:15.297399 802973 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0210 11:17:15.297422 802973 kic_runner.go:114] Args: [docker exec --privileged embed-certs-822142 chown docker:docker /home/docker/.ssh/authorized_keys]
I0210 11:17:15.366262 802973 cli_runner.go:164] Run: docker container inspect embed-certs-822142 --format={{.State.Status}}
I0210 11:17:15.391371 802973 machine.go:93] provisionDockerMachine start ...
I0210 11:17:15.391461 802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
I0210 11:17:15.421342 802973 main.go:141] libmachine: Using SSH client type: native
I0210 11:17:15.421665 802973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil> [] 0s} 127.0.0.1 33808 <nil> <nil>}
I0210 11:17:15.421686 802973 main.go:141] libmachine: About to run SSH command:
hostname
I0210 11:17:15.429700 802973 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0210 11:17:18.556944 802973 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-822142
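[editor's note] The repeated `docker container inspect -f ...` calls around the SSH steps look up which ephemeral host port docker published the container's 22/tcp to (33808 in this run); minikube then dials its SSH client at 127.0.0.1 on that port. A small Go sketch of just the lookup; the Go-template format string is copied from the log, the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "embed-certs-822142").Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}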
I0210 11:17:18.556969 802973 ubuntu.go:169] provisioning hostname "embed-certs-822142"
I0210 11:17:18.557049 802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
I0210 11:17:18.574796 802973 main.go:141] libmachine: Using SSH client type: native
I0210 11:17:18.576887 802973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil> [] 0s} 127.0.0.1 33808 <nil> <nil>}
I0210 11:17:18.576918 802973 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-822142 && echo "embed-certs-822142" | sudo tee /etc/hostname
I0210 11:17:18.714590 802973 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-822142
I0210 11:17:18.714668 802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
I0210 11:17:18.733236 802973 main.go:141] libmachine: Using SSH client type: native
I0210 11:17:18.733551 802973 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x4149b0] 0x4171f0 <nil> [] 0s} 127.0.0.1 33808 <nil> <nil>}
I0210 11:17:18.733569 802973 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-822142' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-822142/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-822142' | sudo tee -a /etc/hosts;
fi
fi
I0210 11:17:18.857672 802973 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0210 11:17:18.857766 802973 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20385-576242/.minikube CaCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20385-576242/.minikube}
I0210 11:17:18.857801 802973 ubuntu.go:177] setting up certificates
I0210 11:17:18.857835 802973 provision.go:84] configureAuth start
I0210 11:17:18.857920 802973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-822142
I0210 11:17:18.874240 802973 provision.go:143] copyHostCerts
I0210 11:17:18.874306 802973 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem, removing ...
I0210 11:17:18.874319 802973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem
I0210 11:17:18.874395 802973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/ca.pem (1078 bytes)
I0210 11:17:18.874498 802973 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem, removing ...
I0210 11:17:18.874510 802973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem
I0210 11:17:18.874545 802973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/cert.pem (1123 bytes)
I0210 11:17:18.874613 802973 exec_runner.go:144] found /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem, removing ...
I0210 11:17:18.874626 802973 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem
I0210 11:17:18.874652 802973 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20385-576242/.minikube/key.pem (1679 bytes)
I0210 11:17:18.874714 802973 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem org=jenkins.embed-certs-822142 san=[127.0.0.1 192.168.85.2 embed-certs-822142 localhost minikube]
I0210 11:17:20.464640 802973 provision.go:177] copyRemoteCerts
I0210 11:17:20.464758 802973 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0210 11:17:20.464863 802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
I0210 11:17:20.483058 802973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa Username:docker}
I0210 11:17:20.574272 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0210 11:17:20.599120 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0210 11:17:20.624908 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0210 11:17:20.650939 802973 provision.go:87] duration metric: took 1.793072901s to configureAuth
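configureAuth above regenerates the docker-machine server certificate with SANs covering the loopback address, the container IP, and the machine name (see the san=[...] list in the "generating server cert" line). A self-contained sketch of issuing a certificate with those SANs using Go's crypto/x509; it is self-signed here for brevity, whereas minikube signs with ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-822142"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: [127.0.0.1 192.168.85.2 embed-certs-822142 localhost minikube]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"embed-certs-822142", "localhost", "minikube"},
	}
	// Self-signed for the sketch; the real flow signs with the minikube CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}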
I0210 11:17:20.651006 802973 ubuntu.go:193] setting minikube options for container-runtime
I0210 11:17:20.651198 802973 config.go:182] Loaded profile config "embed-certs-822142": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.32.1
I0210 11:17:20.651214 802973 machine.go:96] duration metric: took 5.259824852s to provisionDockerMachine
I0210 11:17:20.651222 802973 client.go:171] duration metric: took 12.599644812s to LocalClient.Create
I0210 11:17:20.651237 802973 start.go:167] duration metric: took 12.59971655s to libmachine.API.Create "embed-certs-822142"
I0210 11:17:20.651244 802973 start.go:293] postStartSetup for "embed-certs-822142" (driver="docker")
I0210 11:17:20.651253 802973 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0210 11:17:20.651310 802973 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0210 11:17:20.651366 802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
I0210 11:17:20.668300 802973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa Username:docker}
I0210 11:17:20.759359 802973 ssh_runner.go:195] Run: cat /etc/os-release
I0210 11:17:20.762884 802973 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0210 11:17:20.762964 802973 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0210 11:17:20.762980 802973 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0210 11:17:20.762989 802973 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0210 11:17:20.763003 802973 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-576242/.minikube/addons for local assets ...
I0210 11:17:20.763074 802973 filesync.go:126] Scanning /home/jenkins/minikube-integration/20385-576242/.minikube/files for local assets ...
I0210 11:17:20.763160 802973 filesync.go:149] local asset: /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem -> 5816292.pem in /etc/ssl/certs
I0210 11:17:20.763274 802973 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0210 11:17:20.772370 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem --> /etc/ssl/certs/5816292.pem (1708 bytes)
I0210 11:17:20.805302 802973 start.go:296] duration metric: took 154.043074ms for postStartSetup
I0210 11:17:20.805727 802973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-822142
I0210 11:17:20.822895 802973 profile.go:143] Saving config to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/config.json ...
I0210 11:17:20.823186 802973 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0210 11:17:20.823242 802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
I0210 11:17:20.842298 802973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa Username:docker}
I0210 11:17:20.935187 802973 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0210 11:17:20.942650 802973 start.go:128] duration metric: took 12.894947282s to createHost
I0210 11:17:20.942677 802973 start.go:83] releasing machines lock for "embed-certs-822142", held for 12.895083047s
I0210 11:17:20.942752 802973 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-822142
I0210 11:17:20.960968 802973 ssh_runner.go:195] Run: cat /version.json
I0210 11:17:20.961035 802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
I0210 11:17:20.961285 802973 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0210 11:17:20.961341 802973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-822142
I0210 11:17:20.983437 802973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa Username:docker}
I0210 11:17:21.001874 802973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33808 SSHKeyPath:/home/jenkins/minikube-integration/20385-576242/.minikube/machines/embed-certs-822142/id_rsa Username:docker}
I0210 11:17:21.207532 802973 ssh_runner.go:195] Run: systemctl --version
I0210 11:17:21.212000 802973 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0210 11:17:21.216353 802973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0210 11:17:21.241271 802973 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0210 11:17:21.241363 802973 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0210 11:17:21.273628 802973 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
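The two find commands above first patch any loopback CNI config in place and then sideline bridge/podman configs by renaming them with a .mk_disabled suffix, which is why 87-podman-bridge.conflist and 100-crio-bridge.conf are reported as disabled. A rough local Go equivalent of the rename pass (substring matching used here as an approximation of the -name globs):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman CNI configs in dir by appending
// ".mk_disabled", mirroring the find/mv pass in the log above.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}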
I0210 11:17:21.273651 802973 start.go:495] detecting cgroup driver to use...
I0210 11:17:21.273686 802973 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0210 11:17:21.273756 802973 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0210 11:17:21.288378 802973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0210 11:17:21.300136 802973 docker.go:217] disabling cri-docker service (if available) ...
I0210 11:17:21.300201 802973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0210 11:17:21.314350 802973 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0210 11:17:21.329171 802973 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0210 11:17:21.424717 802973 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0210 11:17:21.520459 802973 docker.go:233] disabling docker service ...
I0210 11:17:21.520570 802973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0210 11:17:21.542235 802973 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0210 11:17:21.554458 802973 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0210 11:17:21.663365 802973 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0210 11:17:21.787765 802973 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0210 11:17:21.800304 802973 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0210 11:17:21.817081 802973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0210 11:17:21.826976 802973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0210 11:17:21.838768 802973 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0210 11:17:21.838871 802973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0210 11:17:21.852386 802973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0210 11:17:21.864843 802973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0210 11:17:21.875736 802973 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0210 11:17:21.886867 802973 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0210 11:17:21.898979 802973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0210 11:17:21.911124 802973 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0210 11:17:21.922538 802973 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0210 11:17:21.932877 802973 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0210 11:17:21.949614 802973 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0210 11:17:21.963841 802973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0210 11:17:22.090852 802973 ssh_runner.go:195] Run: sudo systemctl restart containerd
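Between stopping the docker/cri-docker units and this containerd restart, the run applies a fixed series of sed edits to /etc/containerd/config.toml (sandbox image, cgroup driver, runc v2 shim, CNI conf_dir, unprivileged ports). A condensed sketch of part of that sequence as data, with runSSH as a hypothetical stand-in for minikube's ssh_runner:

package main

import "fmt"

// A subset of the ordered edits from the log above; each runs over SSH in the real flow.
var containerdEdits = []string{
	`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml`,
	`sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml`,
	`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
	`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
	`sudo systemctl daemon-reload`,
	`sudo systemctl restart containerd`,
}

func reconfigureContainerd(runSSH func(string) error) error {
	for _, cmd := range containerdEdits {
		if err := runSSH(cmd); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Dry run: print each command instead of executing it remotely.
	_ = reconfigureContainerd(func(cmd string) error { fmt.Println(cmd); return nil })
}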
I0210 11:17:22.290969 802973 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0210 11:17:22.291094 802973 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0210 11:17:22.295095 802973 start.go:563] Will wait 60s for crictl version
I0210 11:17:22.295206 802973 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.298662 802973 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0210 11:17:22.365585 802973 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.24
RuntimeApiVersion: v1
I0210 11:17:22.365696 802973 ssh_runner.go:195] Run: containerd --version
I0210 11:17:22.395971 802973 ssh_runner.go:195] Run: containerd --version
I0210 11:17:22.428516 802973 out.go:177] * Preparing Kubernetes v1.32.1 on containerd 1.7.24 ...
I0210 11:17:22.431522 802973 cli_runner.go:164] Run: docker network inspect embed-certs-822142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0210 11:17:22.453089 802973 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0210 11:17:22.457029 802973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0210 11:17:22.468908 802973 kubeadm.go:883] updating cluster {Name:embed-certs-822142 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-822142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0210 11:17:22.469045 802973 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime containerd
I0210 11:17:22.469115 802973 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 11:17:22.524876 802973 containerd.go:627] all images are preloaded for containerd runtime.
I0210 11:17:22.524910 802973 containerd.go:534] Images already preloaded, skipping extraction
I0210 11:17:22.524971 802973 ssh_runner.go:195] Run: sudo crictl images --output json
I0210 11:17:22.596641 802973 containerd.go:627] all images are preloaded for containerd runtime.
I0210 11:17:22.596661 802973 cache_images.go:84] Images are preloaded, skipping loading
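The preload check above decides whether images need to be extracted by comparing the expected image set against what `sudo crictl images --output json` reports. A minimal sketch of parsing that listing, assuming crictl's JSON shape of {"images":[{"repoTags":[...]}]}:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictl's JSON image listing; only the repo tags matter for the preload check.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// loadedImages returns every tag reported by `crictl images --output json`,
// the same listing the preload check above inspects.
func loadedImages() (map[string]bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return nil, err
	}
	tags := map[string]bool{}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			tags[t] = true
		}
	}
	return tags, nil
}

func main() {
	tags, err := loadedImages()
	if err != nil {
		panic(err)
	}
	// e.g. confirm the apiserver image for the requested version is already present
	fmt.Println(tags["registry.k8s.io/kube-apiserver:v1.32.1"])
}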
I0210 11:17:22.596669 802973 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.1 containerd true true} ...
I0210 11:17:22.596774 802973 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-822142 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.1 ClusterName:embed-certs-822142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0210 11:17:22.596836 802973 ssh_runner.go:195] Run: sudo crictl info
I0210 11:17:22.651446 802973 cni.go:84] Creating CNI manager for ""
I0210 11:17:22.651522 802973 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0210 11:17:22.651546 802973 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0210 11:17:22.651603 802973 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-822142 NodeName:embed-certs-822142 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0210 11:17:22.651773 802973 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "embed-certs-822142"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.85.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.32.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0210 11:17:22.651875 802973 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
I0210 11:17:22.662804 802973 binaries.go:44] Found k8s binaries, skipping transfer
I0210 11:17:22.662879 802973 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0210 11:17:22.678027 802973 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0210 11:17:22.703803 802973 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0210 11:17:22.726960 802973 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I0210 11:17:22.750290 802973 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0210 11:17:22.754309 802973 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0210 11:17:22.766121 802973 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0210 11:17:22.885745 802973 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0210 11:17:22.906878 802973 certs.go:68] Setting up /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142 for IP: 192.168.85.2
I0210 11:17:22.906905 802973 certs.go:194] generating shared ca certs ...
I0210 11:17:22.906921 802973 certs.go:226] acquiring lock for ca certs: {Name:mk41210dcb5a25827819de2f65fc930debb2adb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 11:17:22.907058 802973 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20385-576242/.minikube/ca.key
I0210 11:17:22.907099 802973 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.key
I0210 11:17:22.907106 802973 certs.go:256] generating profile certs ...
I0210 11:17:22.907160 802973 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.key
I0210 11:17:22.907172 802973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.crt with IP's: []
I0210 11:17:23.307894 802973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.crt ...
I0210 11:17:23.307920 802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.crt: {Name:mk8870791c3c3973168792207acd9eb0b2a40a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 11:17:23.308866 802973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.key ...
I0210 11:17:23.308914 802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/client.key: {Name:mk604a037a81a7ea58f5afe10f1a089ed594d3f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 11:17:23.309082 802973 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key.7c314cda
I0210 11:17:23.309123 802973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt.7c314cda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0210 11:17:24.339512 802973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt.7c314cda ...
I0210 11:17:24.339543 802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt.7c314cda: {Name:mkd5e2b191a7961291802da6ed354ef008572159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 11:17:24.340319 802973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key.7c314cda ...
I0210 11:17:24.340340 802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key.7c314cda: {Name:mk0c36bcdd91c971ea900fe7bc5d35c59eb31924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 11:17:24.340441 802973 certs.go:381] copying /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt.7c314cda -> /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt
I0210 11:17:24.340530 802973 certs.go:385] copying /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key.7c314cda -> /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key
I0210 11:17:24.340594 802973 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.key
I0210 11:17:24.340613 802973 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.crt with IP's: []
I0210 11:17:24.743254 802973 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.crt ...
I0210 11:17:24.743290 802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.crt: {Name:mkb1c131a1fbe9c90e39e56039ffe4956412086c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 11:17:24.744118 802973 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.key ...
I0210 11:17:24.744136 802973 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.key: {Name:mk0e22431673b3045ab42d984b949c91818da058 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0210 11:17:24.744339 802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629.pem (1338 bytes)
W0210 11:17:24.744388 802973 certs.go:480] ignoring /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629_empty.pem, impossibly tiny 0 bytes
I0210 11:17:24.744403 802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca-key.pem (1679 bytes)
I0210 11:17:24.744430 802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/ca.pem (1078 bytes)
I0210 11:17:24.744458 802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/cert.pem (1123 bytes)
I0210 11:17:24.744485 802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/certs/key.pem (1679 bytes)
I0210 11:17:24.744531 802973 certs.go:484] found cert: /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem (1708 bytes)
I0210 11:17:24.745141 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0210 11:17:24.770589 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0210 11:17:24.795510 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0210 11:17:24.820316 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0210 11:17:24.845805 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0210 11:17:24.871079 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0210 11:17:24.899257 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0210 11:17:24.924247 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/profiles/embed-certs-822142/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0210 11:17:24.958799 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/certs/581629.pem --> /usr/share/ca-certificates/581629.pem (1338 bytes)
I0210 11:17:24.984660 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/files/etc/ssl/certs/5816292.pem --> /usr/share/ca-certificates/5816292.pem (1708 bytes)
I0210 11:17:25.016620 802973 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20385-576242/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0210 11:17:25.044496 802973 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0210 11:17:25.063390 802973 ssh_runner.go:195] Run: openssl version
I0210 11:17:25.069533 802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0210 11:17:25.080132 802973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0210 11:17:25.084150 802973 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Feb 10 10:25 /usr/share/ca-certificates/minikubeCA.pem
I0210 11:17:25.084253 802973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0210 11:17:25.092625 802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0210 11:17:25.104621 802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/581629.pem && ln -fs /usr/share/ca-certificates/581629.pem /etc/ssl/certs/581629.pem"
I0210 11:17:25.114512 802973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/581629.pem
I0210 11:17:25.118254 802973 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Feb 10 10:32 /usr/share/ca-certificates/581629.pem
I0210 11:17:25.118353 802973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/581629.pem
I0210 11:17:25.125148 802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/581629.pem /etc/ssl/certs/51391683.0"
I0210 11:17:25.136002 802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5816292.pem && ln -fs /usr/share/ca-certificates/5816292.pem /etc/ssl/certs/5816292.pem"
I0210 11:17:25.147305 802973 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5816292.pem
I0210 11:17:25.151156 802973 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Feb 10 10:32 /usr/share/ca-certificates/5816292.pem
I0210 11:17:25.151256 802973 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5816292.pem
I0210 11:17:25.158821 802973 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5816292.pem /etc/ssl/certs/3ec20f2e.0"
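The openssl runs above follow the standard OpenSSL rehash convention: each CA is installed under /usr/share/ca-certificates and then symlinked from /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 here). A small sketch of that hash-and-link step, assuming openssl is on PATH and the process can write /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash replicates the pattern above: compute the OpenSSL
// subject hash of certPath and symlink /etc/ssl/certs/<hash>.0 to it.
func linkBySubjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// ln -fs semantics: drop any stale link first, then create the new one.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link)
}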
I0210 11:17:25.169927 802973 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0210 11:17:25.173496 802973 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0210 11:17:25.173644 802973 kubeadm.go:392] StartCluster: {Name:embed-certs-822142 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:embed-certs-822142 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0210 11:17:25.173767 802973 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0210 11:17:25.173836 802973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0210 11:17:25.213573 802973 cri.go:89] found id: ""
I0210 11:17:25.213650 802973 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0210 11:17:25.222988 802973 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0210 11:17:25.232053 802973 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0210 11:17:25.232182 802973 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0210 11:17:25.242258 802973 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0210 11:17:25.242281 802973 kubeadm.go:157] found existing configuration files:
I0210 11:17:25.242353 802973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0210 11:17:25.251250 802973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0210 11:17:25.251367 802973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0210 11:17:25.260016 802973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0210 11:17:25.269339 802973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0210 11:17:25.269406 802973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0210 11:17:25.279339 802973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0210 11:17:25.291878 802973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0210 11:17:25.291990 802973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0210 11:17:25.301917 802973 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0210 11:17:25.312112 802973 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0210 11:17:25.312225 802973 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0210 11:17:25.333270 802973 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
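The kubeadm init invocation above pins PATH at the versioned binaries directory and skips the preflight checks that cannot pass inside a docker-driver container. A sketch of assembling the same command string (the ignore list is copied from the line above):

package main

import (
	"fmt"
	"strings"
)

// ignoredPreflight mirrors the list passed above; inside a container the
// docker driver cannot satisfy these checks, so they are skipped explicitly.
var ignoredPreflight = []string{
	"DirAvailable--etc-kubernetes-manifests",
	"DirAvailable--var-lib-minikube",
	"DirAvailable--var-lib-minikube-etcd",
	"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
	"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
	"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
	"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
	"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
	"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
}

func kubeadmInitCmd(version string) string {
	return fmt.Sprintf(
		`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=%s`,
		version, strings.Join(ignoredPreflight, ","))
}

func main() { fmt.Println(kubeadmInitCmd("v1.32.1")) }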
I0210 11:17:25.408443 802973 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
I0210 11:17:25.408568 802973 kubeadm.go:310] [preflight] Running pre-flight checks
I0210 11:17:25.437214 802973 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0210 11:17:25.437350 802973 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1075-aws
I0210 11:17:25.437425 802973 kubeadm.go:310] OS: Linux
I0210 11:17:25.437534 802973 kubeadm.go:310] CGROUPS_CPU: enabled
I0210 11:17:25.437615 802973 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0210 11:17:25.437701 802973 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0210 11:17:25.437786 802973 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0210 11:17:25.437871 802973 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0210 11:17:25.437953 802973 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0210 11:17:25.438027 802973 kubeadm.go:310] CGROUPS_PIDS: enabled
I0210 11:17:25.438105 802973 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0210 11:17:25.438187 802973 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0210 11:17:25.509763 802973 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0210 11:17:25.509916 802973 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0210 11:17:25.510030 802973 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0210 11:17:25.518002 802973 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0210 11:17:21.633459 792122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0210 11:17:21.646627 792122 api_server.go:72] duration metric: took 5m57.197162359s to wait for apiserver process to appear ...
I0210 11:17:21.646652 792122 api_server.go:88] waiting for apiserver healthz status ...
I0210 11:17:21.646689 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0210 11:17:21.646747 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0210 11:17:21.702943 792122 cri.go:89] found id: "ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
I0210 11:17:21.702968 792122 cri.go:89] found id: "04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
I0210 11:17:21.702974 792122 cri.go:89] found id: ""
I0210 11:17:21.702981 792122 logs.go:282] 2 containers: [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01]
I0210 11:17:21.703043 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.706808 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.711614 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0210 11:17:21.711686 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0210 11:17:21.769142 792122 cri.go:89] found id: "4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
I0210 11:17:21.769166 792122 cri.go:89] found id: "3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
I0210 11:17:21.769171 792122 cri.go:89] found id: ""
I0210 11:17:21.769178 792122 logs.go:282] 2 containers: [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba]
I0210 11:17:21.769231 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.772814 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.776371 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0210 11:17:21.776467 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0210 11:17:21.835068 792122 cri.go:89] found id: "23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
I0210 11:17:21.835099 792122 cri.go:89] found id: "a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
I0210 11:17:21.835105 792122 cri.go:89] found id: ""
I0210 11:17:21.835112 792122 logs.go:282] 2 containers: [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d]
I0210 11:17:21.835205 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.839601 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.843809 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0210 11:17:21.843906 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0210 11:17:21.894020 792122 cri.go:89] found id: "2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
I0210 11:17:21.894042 792122 cri.go:89] found id: "8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
I0210 11:17:21.894047 792122 cri.go:89] found id: ""
I0210 11:17:21.894054 792122 logs.go:282] 2 containers: [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd]
I0210 11:17:21.894151 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.898071 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.902515 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0210 11:17:21.902616 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0210 11:17:21.980105 792122 cri.go:89] found id: "2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
I0210 11:17:21.980138 792122 cri.go:89] found id: "6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
I0210 11:17:21.980144 792122 cri.go:89] found id: ""
I0210 11:17:21.980151 792122 logs.go:282] 2 containers: [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1]
I0210 11:17:21.980235 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.984322 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:21.987666 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0210 11:17:21.987780 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0210 11:17:22.059620 792122 cri.go:89] found id: "aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
I0210 11:17:22.059644 792122 cri.go:89] found id: "d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
I0210 11:17:22.059649 792122 cri.go:89] found id: ""
I0210 11:17:22.059658 792122 logs.go:282] 2 containers: [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d]
I0210 11:17:22.059744 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.063872 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.067934 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0210 11:17:22.068028 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0210 11:17:22.120294 792122 cri.go:89] found id: "63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
I0210 11:17:22.120314 792122 cri.go:89] found id: "9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
I0210 11:17:22.120319 792122 cri.go:89] found id: ""
I0210 11:17:22.120326 792122 logs.go:282] 2 containers: [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4]
I0210 11:17:22.120379 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.124012 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.133616 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0210 11:17:22.133685 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0210 11:17:22.193873 792122 cri.go:89] found id: "b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
I0210 11:17:22.193892 792122 cri.go:89] found id: "221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
I0210 11:17:22.193897 792122 cri.go:89] found id: ""
I0210 11:17:22.193904 792122 logs.go:282] 2 containers: [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525]
I0210 11:17:22.193959 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.197703 792122 ssh_runner.go:195] Run: which crictl
I0210 11:17:22.201260 792122 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0210 11:17:22.201380 792122 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0210 11:17:22.252446 792122 cri.go:89] found id: "6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
I0210 11:17:22.252510 792122 cri.go:89] found id: ""
I0210 11:17:22.252533 792122 logs.go:282] 1 containers: [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7]
I0210 11:17:22.252606 792122 ssh_runner.go:195] Run: which crictl
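The interleaved 792122 lines here come from the parallel old-k8s-version-705847 run, which is gathering diagnostics: it lists container IDs per component with crictl ps (above), then tails each container's logs (below). A compact sketch of that gather loop:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all (including exited) containers for one component,
// matching the `crictl ps -a --quiet --name=...` calls above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println(err)
			continue
		}
		for _, id := range ids {
			// mirrors: sudo /usr/bin/crictl logs --tail 400 <id>
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
		}
	}
}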
I0210 11:17:22.256456 792122 logs.go:123] Gathering logs for kube-scheduler [8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd] ...
I0210 11:17:22.256522 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd"
I0210 11:17:22.319127 792122 logs.go:123] Gathering logs for kube-proxy [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708] ...
I0210 11:17:22.319197 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708"
I0210 11:17:22.371929 792122 logs.go:123] Gathering logs for kube-proxy [6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1] ...
I0210 11:17:22.371996 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1"
I0210 11:17:22.419946 792122 logs.go:123] Gathering logs for storage-provisioner [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84] ...
I0210 11:17:22.420016 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84"
I0210 11:17:22.500193 792122 logs.go:123] Gathering logs for describe nodes ...
I0210 11:17:22.500219 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0210 11:17:22.689017 792122 logs.go:123] Gathering logs for coredns [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198] ...
I0210 11:17:22.689049 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198"
I0210 11:17:22.771016 792122 logs.go:123] Gathering logs for kube-scheduler [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff] ...
I0210 11:17:22.771047 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff"
I0210 11:17:22.833433 792122 logs.go:123] Gathering logs for kindnet [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870] ...
I0210 11:17:22.833464 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870"
I0210 11:17:22.899680 792122 logs.go:123] Gathering logs for containerd ...
I0210 11:17:22.899757 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0210 11:17:22.995820 792122 logs.go:123] Gathering logs for dmesg ...
I0210 11:17:22.995915 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0210 11:17:23.022911 792122 logs.go:123] Gathering logs for etcd [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587] ...
I0210 11:17:23.022939 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587"
I0210 11:17:23.088083 792122 logs.go:123] Gathering logs for coredns [a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d] ...
I0210 11:17:23.088257 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d"
I0210 11:17:23.173762 792122 logs.go:123] Gathering logs for container status ...
I0210 11:17:23.173836 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0210 11:17:23.230605 792122 logs.go:123] Gathering logs for kube-apiserver [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1] ...
I0210 11:17:23.230682 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1"
I0210 11:17:23.306650 792122 logs.go:123] Gathering logs for kube-apiserver [04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01] ...
I0210 11:17:23.306724 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01"
I0210 11:17:23.388460 792122 logs.go:123] Gathering logs for kubernetes-dashboard [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7] ...
I0210 11:17:23.388501 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7"
I0210 11:17:23.442850 792122 logs.go:123] Gathering logs for kube-controller-manager [d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d] ...
I0210 11:17:23.442879 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d"
I0210 11:17:23.569314 792122 logs.go:123] Gathering logs for kindnet [9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4] ...
I0210 11:17:23.569354 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4"
I0210 11:17:23.623310 792122 logs.go:123] Gathering logs for storage-provisioner [221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525] ...
I0210 11:17:23.623338 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525"
I0210 11:17:23.669314 792122 logs.go:123] Gathering logs for kubelet ...
I0210 11:17:23.669343 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0210 11:17:23.735791 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.495697 665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.736086 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496065 665 reflector.go:138] object-"kube-system"/"coredns-token-7cchl": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-7cchl" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.736428 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496388 665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-r7rrz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-r7rrz" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.736690 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.496738 665 reflector.go:138] object-"default"/"default-token-q8wzb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-q8wzb" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.736907 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.500993 665 reflector.go:138] object-"kube-system"/"kindnet-token-h7brt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-h7brt" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.737110 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501261 665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.737331 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501486 665 reflector.go:138] object-"kube-system"/"metrics-server-token-pddsx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-pddsx" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.737557 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:42 old-k8s-version-705847 kubelet[665]: E0210 11:11:42.501700 665 reflector.go:138] object-"kube-system"/"kube-proxy-token-92pf5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-92pf5" is forbidden: User "system:node:old-k8s-version-705847" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-705847' and this object
W0210 11:17:23.744448 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:43 old-k8s-version-705847 kubelet[665]: E0210 11:11:43.988520 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:23.744641 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:44 old-k8s-version-705847 kubelet[665]: E0210 11:11:44.494625 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.748240 792122 logs.go:138] Found kubelet problem: Feb 10 11:11:57 old-k8s-version-705847 kubelet[665]: E0210 11:11:57.176598 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:23.750410 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:06 old-k8s-version-705847 kubelet[665]: E0210 11:12:06.587650 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.750747 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:07 old-k8s-version-705847 kubelet[665]: E0210 11:12:07.588161 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.750932 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:12 old-k8s-version-705847 kubelet[665]: E0210 11:12:12.166247 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.751597 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:13 old-k8s-version-705847 kubelet[665]: E0210 11:12:13.359119 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.752034 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:15 old-k8s-version-705847 kubelet[665]: E0210 11:12:15.615500 665 pod_workers.go:191] Error syncing pod 9fb88c78-7e13-4c39-b861-6a75febd2f29 ("storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(9fb88c78-7e13-4c39-b861-6a75febd2f29)"
W0210 11:17:23.752959 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:24 old-k8s-version-705847 kubelet[665]: E0210 11:12:24.650563 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.755482 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:26 old-k8s-version-705847 kubelet[665]: E0210 11:12:26.179066 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:23.755947 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:33 old-k8s-version-705847 kubelet[665]: E0210 11:12:33.359712 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.756133 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:38 old-k8s-version-705847 kubelet[665]: E0210 11:12:38.166028 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.756462 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:44 old-k8s-version-705847 kubelet[665]: E0210 11:12:44.165493 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.756668 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:52 old-k8s-version-705847 kubelet[665]: E0210 11:12:52.166523 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.757257 792122 logs.go:138] Found kubelet problem: Feb 10 11:12:58 old-k8s-version-705847 kubelet[665]: E0210 11:12:58.763561 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.757662 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:03 old-k8s-version-705847 kubelet[665]: E0210 11:13:03.358918 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.757866 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:06 old-k8s-version-705847 kubelet[665]: E0210 11:13:06.166020 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.758208 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:15 old-k8s-version-705847 kubelet[665]: E0210 11:13:15.165381 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.760718 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:20 old-k8s-version-705847 kubelet[665]: E0210 11:13:20.182857 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:23.761078 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:27 old-k8s-version-705847 kubelet[665]: E0210 11:13:27.165926 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.761277 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:34 old-k8s-version-705847 kubelet[665]: E0210 11:13:34.166696 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.761645 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:38 old-k8s-version-705847 kubelet[665]: E0210 11:13:38.165396 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.761831 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:46 old-k8s-version-705847 kubelet[665]: E0210 11:13:46.167918 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.762418 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:51 old-k8s-version-705847 kubelet[665]: E0210 11:13:51.896354 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.762750 792122 logs.go:138] Found kubelet problem: Feb 10 11:13:53 old-k8s-version-705847 kubelet[665]: E0210 11:13:53.359574 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.762936 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:00 old-k8s-version-705847 kubelet[665]: E0210 11:14:00.171443 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.763264 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:05 old-k8s-version-705847 kubelet[665]: E0210 11:14:05.165923 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.763453 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:12 old-k8s-version-705847 kubelet[665]: E0210 11:14:12.166921 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.763802 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:17 old-k8s-version-705847 kubelet[665]: E0210 11:14:17.165864 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.763989 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:26 old-k8s-version-705847 kubelet[665]: E0210 11:14:26.165733 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.764385 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:31 old-k8s-version-705847 kubelet[665]: E0210 11:14:31.165921 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.764574 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:40 old-k8s-version-705847 kubelet[665]: E0210 11:14:40.166598 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.764916 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:46 old-k8s-version-705847 kubelet[665]: E0210 11:14:46.165929 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.767429 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:55 old-k8s-version-705847 kubelet[665]: E0210 11:14:55.174499 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0210 11:17:23.767825 792122 logs.go:138] Found kubelet problem: Feb 10 11:14:58 old-k8s-version-705847 kubelet[665]: E0210 11:14:58.165404 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.768160 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:09 old-k8s-version-705847 kubelet[665]: E0210 11:15:09.165442 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.768346 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:10 old-k8s-version-705847 kubelet[665]: E0210 11:15:10.167175 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.768960 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:21 old-k8s-version-705847 kubelet[665]: E0210 11:15:21.169488 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.769151 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:22 old-k8s-version-705847 kubelet[665]: E0210 11:15:22.173180 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.769564 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:23 old-k8s-version-705847 kubelet[665]: E0210 11:15:23.359507 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.769768 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:35 old-k8s-version-705847 kubelet[665]: E0210 11:15:35.165986 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.770114 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:36 old-k8s-version-705847 kubelet[665]: E0210 11:15:36.165557 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.770306 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:50 old-k8s-version-705847 kubelet[665]: E0210 11:15:50.166193 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.770642 792122 logs.go:138] Found kubelet problem: Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: E0210 11:15:51.165434 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.770826 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:01 old-k8s-version-705847 kubelet[665]: E0210 11:16:01.166143 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.771154 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: E0210 11:16:03.165397 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.771340 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:13 old-k8s-version-705847 kubelet[665]: E0210 11:16:13.165902 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.771666 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: E0210 11:16:14.165502 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.771864 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:25 old-k8s-version-705847 kubelet[665]: E0210 11:16:25.165821 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.772192 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: E0210 11:16:28.166303 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.772377 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:39 old-k8s-version-705847 kubelet[665]: E0210 11:16:39.165848 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.772703 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.772887 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.773213 792122 logs.go:138] Found kubelet problem: Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.773398 792122 logs.go:138] Found kubelet problem: Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.773731 792122 logs.go:138] Found kubelet problem: Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.774137 792122 logs.go:138] Found kubelet problem: Feb 10 11:17:14 old-k8s-version-705847 kubelet[665]: E0210 11:17:14.167326 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
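Every "Found kubelet problem" above reduces to two loops: metrics-server can never pull fake.domain/registry.k8s.io/echoserver:1.4 (the host does not resolve, so ErrImagePull alternates with ImagePullBackOff), and dashboard-metrics-scraper keeps crashing, with kubelet's standard exponential CrashLoopBackOff delay doubling 10s, 20s, 40s, 1m20s, 2m40s. Hypothetical spot-checks, with pod names and namespaces taken from the log:

  kubectl -n kube-system describe pod metrics-server-9975d5f86-nvn7z            # shows the ImagePullBackOff events
  kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-r58kw --previous   # output of the last crashed attempt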
I0210 11:17:23.774168 792122 logs.go:123] Gathering logs for etcd [3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba] ...
I0210 11:17:23.774184 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba"
I0210 11:17:23.845922 792122 logs.go:123] Gathering logs for kube-controller-manager [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa] ...
I0210 11:17:23.845950 792122 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa"
I0210 11:17:23.939309 792122 out.go:358] Setting ErrFile to fd 2...
I0210 11:17:23.939399 792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0210 11:17:23.939494 792122 out.go:270] X Problems detected in kubelet:
W0210 11:17:23.939681 792122 out.go:270] Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.939739 792122 out.go:270] Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.939779 792122 out.go:270] Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0210 11:17:23.939835 792122 out.go:270] Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
W0210 11:17:23.939870 792122 out.go:270] Feb 10 11:17:14 old-k8s-version-705847 kubelet[665]: E0210 11:17:14.167326 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0210 11:17:23.939920 792122 out.go:358] Setting ErrFile to fd 2...
I0210 11:17:23.939941 792122 out.go:392] TERM=,COLORTERM=, which probably does not support color
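Note the pid switch below: lines tagged 802973 come from a second minikube process whose certificates are issued for "embed-certs-822142", i.e. a different profile running concurrently on the same CI host, not from this test's pid 792122. If the combined output were saved to a file, one stream could be isolated with a filter like this (logs.txt is hypothetical here):

  grep ' 792122 ' logs.txt   # keep only this test's process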
I0210 11:17:25.524248 802973 out.go:235] - Generating certificates and keys ...
I0210 11:17:25.524400 802973 kubeadm.go:310] [certs] Using existing ca certificate authority
I0210 11:17:25.524480 802973 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0210 11:17:26.278242 802973 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0210 11:17:26.978612 802973 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0210 11:17:27.433595 802973 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0210 11:17:28.026034 802973 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0210 11:17:28.243561 802973 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0210 11:17:28.243900 802973 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-822142 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0210 11:17:28.582828 802973 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0210 11:17:28.583209 802973 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-822142 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0210 11:17:29.578654 802973 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0210 11:17:30.184922 802973 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0210 11:17:30.470498 802973 kubeadm.go:310] [certs] Generating "sa" key and public key
I0210 11:17:30.470801 802973 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0210 11:17:31.196190 802973 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0210 11:17:32.138487 802973 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0210 11:17:32.294362 802973 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0210 11:17:32.615934 802973 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0210 11:17:33.360822 802973 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0210 11:17:33.361413 802973 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0210 11:17:33.364301 802973 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
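The [certs], [kubeconfig], [etcd] and [control-plane] markers above are kubeadm's init phases; kubeadm.go:310 is simply relaying kubeadm's own stdout. Run by hand, the equivalent phases would be (a sketch, not commands this test executed directly):

  kubeadm init phase certs all          # CA, apiserver, etcd, front-proxy certs and the sa key
  kubeadm init phase kubeconfig all     # admin.conf, super-admin.conf, kubelet.conf, controller-manager.conf, scheduler.conf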
I0210 11:17:33.941594 792122 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0210 11:17:33.966671 792122 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0210 11:17:33.970082 792122 out.go:201]
W0210 11:17:33.973071 792122 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0210 11:17:33.973117 792122 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0210 11:17:33.973146 792122 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0210 11:17:33.973158 792122 out.go:270] *
W0210 11:17:33.974109  792122 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0210 11:17:33.977004 792122 out.go:201]
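Exit status 102 corresponds to the K8S_UNHEALTHY_CONTROL_PLANE reason above: the apiserver's /healthz answered 200, but the control plane never reported the requested v1.20.0 within the 6m0s wait. The recovery steps minikube itself prints are:

  minikube delete --all --purge       # remove every profile and all cached state
  minikube logs --file=logs.txt       # capture logs to attach to a GitHub issue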
==> container status <==
CONTAINER        IMAGE            CREATED          STATE     NAME                         ATTEMPT   POD ID           POD
b6b6d099aaf4c    523cad1a4df73    2 minutes ago    Exited    dashboard-metrics-scraper    5         5071f6b766b05    dashboard-metrics-scraper-8d5bb5db8-r58kw
b7ef8424fcbcb    ba04bb24b9575    5 minutes ago    Running   storage-provisioner          2         9808bf553cd91    storage-provisioner
6c8852ecb1c21    20b332c9a70d8    5 minutes ago    Running   kubernetes-dashboard         0         89f5de618308e    kubernetes-dashboard-cd95d586-s9bfz
2517ca7acc440    25a5233254979    5 minutes ago    Running   kube-proxy                   1         252c890a5cc31    kube-proxy-qt8rk
221dcab82eb8d    ba04bb24b9575    5 minutes ago    Exited    storage-provisioner          1         9808bf553cd91    storage-provisioner
23929f63f011f    db91994f4ee8f    5 minutes ago    Running   coredns                      1         c93197f6b1902    coredns-74ff55c5b-7fkgl
63daa6ac11e65    e1181ee320546    5 minutes ago    Running   kindnet-cni                  1         4389549f8ec5e    kindnet-l58wz
bcea7e59a62ef    1611cd07b61d5    5 minutes ago    Running   busybox                      1         e5aeb29af6eac    busybox
2ce24aaa2eea1    e7605f88f17d6    6 minutes ago    Running   kube-scheduler               1         356a6949169e7    kube-scheduler-old-k8s-version-705847
aec35b105aa1d    1df8a2b116bd1    6 minutes ago    Running   kube-controller-manager      1         6695d802a859f    kube-controller-manager-old-k8s-version-705847
ad6d38edf5bc8    2c08bbbc02d3a    6 minutes ago    Running   kube-apiserver               1         0b14db362d48d    kube-apiserver-old-k8s-version-705847
4087c4b9c5558    05b738aa1bc63    6 minutes ago    Running   etcd                         1         6c10fc40769a8    etcd-old-k8s-version-705847
a664cfa85004d    1611cd07b61d5    6 minutes ago    Exited    busybox                      0         ee1e0628d275e    busybox
a122c6cf80f3c    db91994f4ee8f    8 minutes ago    Exited    coredns                      0         9e0a78abac64a    coredns-74ff55c5b-7fkgl
9db35ce7df6ab    e1181ee320546    8 minutes ago    Exited    kindnet-cni                  0         f3753ff4ce47e    kindnet-l58wz
6d39bdbc1d81b    25a5233254979    8 minutes ago    Exited    kube-proxy                   0         c12baeac0cbf9    kube-proxy-qt8rk
d49223327cb59    1df8a2b116bd1    8 minutes ago    Exited    kube-controller-manager      0         5e708f9c1252f    kube-controller-manager-old-k8s-version-705847
04c0549198596    2c08bbbc02d3a    8 minutes ago    Exited    kube-apiserver               0         f8ea3c3202fd4    kube-apiserver-old-k8s-version-705847
3fd7073fac25b    05b738aa1bc63    8 minutes ago    Exited    etcd                         0         27f1abb37b719    etcd-old-k8s-version-705847
8d3d8d966ae37    e7605f88f17d6    8 minutes ago    Exited    kube-scheduler               0         214ceeb2fd30c    kube-scheduler-old-k8s-version-705847
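Reading the table: ATTEMPT counts restarts of the same container name inside a pod sandbox, so dashboard-metrics-scraper is on its fifth failed attempt, while every control-plane component has one Exited attempt 0 (pre-restart) and one Running attempt 1 (post-restart). The table is what crictl itself reports; on the node (assuming crictl is configured for the containerd socket, as in this cluster):

  sudo crictl ps -a                                     # all containers, running and exited
  sudo crictl ps -a --name dashboard-metrics-scraper    # just the crash-looping one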
==> containerd <==
Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.173940146Z" level=info msg="CreateContainer within sandbox \"5071f6b766b058cd6cc8b1e44170820d9e60ad007cca015ea3d2cd1af965c68c\" for container name:\"dashboard-metrics-scraper\" attempt:4"
Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.199089636Z" level=info msg="CreateContainer within sandbox \"5071f6b766b058cd6cc8b1e44170820d9e60ad007cca015ea3d2cd1af965c68c\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\""
Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.199691913Z" level=info msg="StartContainer for \"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\""
Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.269620863Z" level=info msg="StartContainer for \"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\" returns successfully"
Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.269710922Z" level=info msg="received exit event container_id:\"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\" id:\"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\" pid:3046 exit_status:255 exited_at:{seconds:1739186031 nanos:267395570}"
Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.295129777Z" level=info msg="shim disconnected" id=5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436 namespace=k8s.io
Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.295201423Z" level=warning msg="cleaning up after shim disconnected" id=5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436 namespace=k8s.io
Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.295211713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.898811613Z" level=info msg="RemoveContainer for \"0f075831770b096e8f8915b0d5950ed20018a644c353cf93ace661bad0f72c56\""
Feb 10 11:13:51 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:13:51.906262996Z" level=info msg="RemoveContainer for \"0f075831770b096e8f8915b0d5950ed20018a644c353cf93ace661bad0f72c56\" returns successfully"
Feb 10 11:14:55 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:14:55.166373189Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 10 11:14:55 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:14:55.171964733Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Feb 10 11:14:55 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:14:55.174007939Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Feb 10 11:14:55 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:14:55.174108287Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.168644673Z" level=info msg="CreateContainer within sandbox \"5071f6b766b058cd6cc8b1e44170820d9e60ad007cca015ea3d2cd1af965c68c\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.191117048Z" level=info msg="CreateContainer within sandbox \"5071f6b766b058cd6cc8b1e44170820d9e60ad007cca015ea3d2cd1af965c68c\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6\""
Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.191944244Z" level=info msg="StartContainer for \"b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6\""
Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.257124435Z" level=info msg="StartContainer for \"b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6\" returns successfully"
Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.258610499Z" level=info msg="received exit event container_id:\"b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6\" id:\"b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6\" pid:3300 exit_status:255 exited_at:{seconds:1739186120 nanos:258326332}"
Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.282252961Z" level=info msg="shim disconnected" id=b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6 namespace=k8s.io
Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.282314975Z" level=warning msg="cleaning up after shim disconnected" id=b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6 namespace=k8s.io
Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.282457390Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 10 11:15:20 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:20.293959895Z" level=warning msg="cleanup warnings time=\"2025-02-10T11:15:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 10 11:15:21 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:21.174400712Z" level=info msg="RemoveContainer for \"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\""
Feb 10 11:15:21 old-k8s-version-705847 containerd[571]: time="2025-02-10T11:15:21.182992741Z" level=info msg="RemoveContainer for \"5f128a398c3be28669ed8e4bccae395cbf38d01ba05f7d820a8973b77a3f9436\" returns successfully"
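The containerd block shows the same CrashLoopBackOff from the runtime's side: CreateContainer (attempt 4, then 5), StartContainer, exit_status:255, shim cleanup, then RemoveContainer of the previous attempt. A hedged follow-up to see why attempt 5 died, using the id containerd printed above:

  sudo crictl inspect b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6   # state, exit code, finishedAt
  sudo crictl logs b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6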
==> coredns [23929f63f011fe68f4a6aabb0ae06894e78df3b3b49e1fcb8d6a726e40b52198] <==
[INFO] plugin/ready: Still waiting on: "kubernetes"
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:34035 - 28803 "HINFO IN 5542394030849349071.1439891449229212650. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022263539s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0210 11:12:15.168436 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-10 11:11:45.16787272 +0000 UTC m=+0.048757483) (total time: 30.000457611s):
Trace[2019727887]: [30.000457611s] [30.000457611s] END
E0210 11:12:15.168469 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0210 11:12:15.168816 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-10 11:11:45.168386802 +0000 UTC m=+0.049271565) (total time: 30.000398214s):
Trace[939984059]: [30.000398214s] [30.000398214s] END
E0210 11:12:15.168890 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0210 11:12:15.168839 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-02-10 11:11:45.168634201 +0000 UTC m=+0.049518981) (total time: 30.000187868s):
Trace[911902081]: [30.000187868s] [30.000187868s] END
E0210 11:12:15.168956 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
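All three failed ListAndWatch calls target 10.96.0.1:443, the ClusterIP of the default kubernetes Service: the reflectors started at 11:11:45 and timed out 30s later at 11:12:15, meaning the restarted coredns spent its first half-minute unable to reach the apiserver through the Service VIP while the control plane came back. A quick confirmation of what that address is (a sketch, not from this run):

  kubectl get svc kubernetes          # ClusterIP, expected 10.96.0.1 in this cluster
  kubectl get endpoints kubernetes    # the apiserver endpoint behind it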
==> coredns [a122c6cf80f3c6dea3c35c0505487ee4b7c354532b5b34cbab907409441efb8d] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:36861 - 19880 "HINFO IN 8376676006252621669.5710798605214046510. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031573941s
==> describe nodes <==
Name: old-k8s-version-705847
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-705847
kubernetes.io/os=linux
minikube.k8s.io/commit=a597502568cd649748018b4cfeb698a4b8b36160
minikube.k8s.io/name=old-k8s-version-705847
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_02_10T11_08_54_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 10 Feb 2025 11:08:51 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-705847
AcquireTime: <unset>
RenewTime: Mon, 10 Feb 2025 11:17:35 +0000
Conditions:
Type             Status   LastHeartbeatTime                 LastTransitionTime                Reason                       Message
----             ------   -----------------                 ------------------                ------                       -------
MemoryPressure   False    Mon, 10 Feb 2025 11:17:35 +0000   Mon, 10 Feb 2025 11:08:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
DiskPressure     False    Mon, 10 Feb 2025 11:17:35 +0000   Mon, 10 Feb 2025 11:08:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
PIDPressure      False    Mon, 10 Feb 2025 11:17:35 +0000   Mon, 10 Feb 2025 11:08:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
Ready            True     Mon, 10 Feb 2025 11:17:35 +0000   Mon, 10 Feb 2025 11:09:09 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-705847
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022304Ki
pods: 110
System Info:
Machine ID: 6249c507abed417499f16b655cb9a80c
System UUID: 8b83af55-3f47-4d24-9d6d-e0877947e999
Boot ID: 562c7f3c-b16a-445a-b1a8-6d6932d5b74d
Kernel Version: 5.15.0-1075-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.24
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace              Name                                              CPU Requests   CPU Limits   Memory Requests   Memory Limits   AGE
---------              ----                                              ------------   ----------   ---------------   -------------   ---
default                busybox                                           0 (0%)         0 (0%)       0 (0%)            0 (0%)          6m46s
kube-system            coredns-74ff55c5b-7fkgl                           100m (5%)      0 (0%)       70Mi (0%)         170Mi (2%)      8m27s
kube-system            etcd-old-k8s-version-705847                       100m (5%)      0 (0%)       100Mi (1%)        0 (0%)          8m34s
kube-system            kindnet-l58wz                                     100m (5%)      100m (5%)    50Mi (0%)         50Mi (0%)       8m27s
kube-system            kube-apiserver-old-k8s-version-705847             250m (12%)     0 (0%)       0 (0%)            0 (0%)          8m34s
kube-system            kube-controller-manager-old-k8s-version-705847   200m (10%)     0 (0%)       0 (0%)            0 (0%)          8m34s
kube-system            kube-proxy-qt8rk                                  0 (0%)         0 (0%)       0 (0%)            0 (0%)          8m27s
kube-system            kube-scheduler-old-k8s-version-705847             100m (5%)      0 (0%)       0 (0%)            0 (0%)          8m34s
kube-system            metrics-server-9975d5f86-nvn7z                    100m (5%)      0 (0%)       200Mi (2%)        0 (0%)          6m33s
kube-system            storage-provisioner                               0 (0%)         0 (0%)       0 (0%)            0 (0%)          8m25s
kubernetes-dashboard   dashboard-metrics-scraper-8d5bb5db8-r58kw         0 (0%)         0 (0%)       0 (0%)            0 (0%)          5m37s
kubernetes-dashboard   kubernetes-dashboard-cd95d586-s9bfz               0 (0%)         0 (0%)       0 (0%)            0 (0%)          5m37s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource            Requests     Limits
--------            --------     ------
cpu                 950m (47%)   100m (5%)
memory              420Mi (5%)   220Mi (2%)
ephemeral-storage   100Mi (0%)   0 (0%)
hugepages-1Gi       0 (0%)       0 (0%)
hugepages-2Mi       0 (0%)       0 (0%)
hugepages-32Mi      0 (0%)       0 (0%)
hugepages-64Ki      0 (0%)       0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m53s (x5 over 8m53s) kubelet Node old-k8s-version-705847 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m53s (x5 over 8m53s) kubelet Node old-k8s-version-705847 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m53s (x4 over 8m53s) kubelet Node old-k8s-version-705847 status is now: NodeHasSufficientPID
Normal Starting 8m34s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m34s kubelet Node old-k8s-version-705847 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m34s kubelet Node old-k8s-version-705847 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m34s kubelet Node old-k8s-version-705847 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m34s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m27s kubelet Node old-k8s-version-705847 status is now: NodeReady
Normal Starting 8m26s kube-proxy Starting kube-proxy.
Normal Starting 6m5s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m4s (x8 over 6m4s) kubelet Node old-k8s-version-705847 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m4s (x8 over 6m4s) kubelet Node old-k8s-version-705847 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m4s (x7 over 6m4s) kubelet Node old-k8s-version-705847 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m4s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m50s kube-proxy Starting kube-proxy.
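The node state and event stream above are standard describe output; assuming the kubeconfig this run wrote (minikube names the context after the profile by default), the same view can be regenerated with:

  kubectl --context old-k8s-version-705847 describe node old-k8s-version-705847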
==> dmesg <==
==> etcd [3fd7073fac25b6b40b5baa29cc64cc453b683017d36d2d19e5d9564105a11dba] <==
raft2025/02/10 11:08:44 INFO: ea7e25599daad906 became candidate at term 2
raft2025/02/10 11:08:44 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2025/02/10 11:08:44 INFO: ea7e25599daad906 became leader at term 2
raft2025/02/10 11:08:44 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-02-10 11:08:44.909123 I | etcdserver: setting up the initial cluster version to 3.4
2025-02-10 11:08:44.909450 I | etcdserver: published {Name:old-k8s-version-705847 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-02-10 11:08:44.916656 I | embed: ready to serve client requests
2025-02-10 11:08:44.917033 I | embed: ready to serve client requests
2025-02-10 11:08:44.918500 I | embed: serving client requests on 127.0.0.1:2379
2025-02-10 11:08:44.925626 I | embed: serving client requests on 192.168.76.2:2379
2025-02-10 11:08:44.936005 N | etcdserver/membership: set the initial cluster version to 3.4
2025-02-10 11:08:44.940823 I | etcdserver/api: enabled capabilities for version 3.4
2025-02-10 11:09:05.286622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:09:11.121280 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:09:21.121462 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:09:31.121619 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:09:41.121327 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:09:51.121478 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:10:01.121391 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:10:11.121346 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:10:21.121333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:10:31.121561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:10:41.121402 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:10:51.121337 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:11:01.122117 I | etcdserver/api/etcdhttp: /health OK (status code 200)
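The recurring "/health OK (status code 200)" lines are etcd answering its HTTP health endpoint roughly every 10s on the client URLs listed above (127.0.0.1:2379 and 192.168.76.2:2379). A sketch of issuing the same probe by hand, assuming the certificate layout minikube's kubeadm config uses (/var/lib/minikube/certs; not confirmed in this log):

  curl --cacert /var/lib/minikube/certs/etcd/ca.crt \
       --cert /var/lib/minikube/certs/etcd/healthcheck-client.crt \
       --key /var/lib/minikube/certs/etcd/healthcheck-client.key \
       https://127.0.0.1:2379/health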
==> etcd [4087c4b9c555892c1681e052080187a74e6cc1dc0290f6051f84747aefc69587] <==
2025-02-10 11:13:27.800019 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:13:37.800064 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:13:47.800165 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:13:57.799986 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:14:07.800072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:14:17.800100 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:14:27.800071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:14:37.800116 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:14:47.799980 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:14:57.800086 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:15:07.800046 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:15:17.800128 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:15:27.800041 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:15:37.800043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:15:47.800081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:15:57.800181 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:16:07.800016 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:16:17.800068 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:16:27.800169 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:16:37.800069 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:16:47.800010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:16:57.800061 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:17:07.800252 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:17:17.800008 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-02-10 11:17:27.806739 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
11:17:36 up 4:00, 0 users, load average: 1.53, 1.89, 2.38
Linux old-k8s-version-705847 5.15.0-1075-aws #82~20.04.1-Ubuntu SMP Thu Dec 19 05:23:06 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
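This "kernel" block is effectively the node container's uptime, uname -a, and the PRETTY_NAME line of /etc/os-release; with the docker driver the container is named after the profile, so roughly (a sketch, not part of the captured log):

  docker exec old-k8s-version-705847 sh -c 'uptime; uname -a; grep PRETTY_NAME /etc/os-release'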
==> kindnet [63daa6ac11e652bdc0f18023a918bf277f54ec083c247c421b488afcdb595870] <==
I0210 11:15:35.502507 1 main.go:301] handling current node
I0210 11:15:45.494656 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:15:45.494692 1 main.go:301] handling current node
I0210 11:15:55.494712 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:15:55.494749 1 main.go:301] handling current node
I0210 11:16:05.502144 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:16:05.502180 1 main.go:301] handling current node
I0210 11:16:15.503006 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:16:15.503043 1 main.go:301] handling current node
I0210 11:16:25.500881 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:16:25.500919 1 main.go:301] handling current node
I0210 11:16:35.501627 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:16:35.501662 1 main.go:301] handling current node
I0210 11:16:45.494746 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:16:45.494783 1 main.go:301] handling current node
I0210 11:16:55.501879 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:16:55.501915 1 main.go:301] handling current node
I0210 11:17:05.501620 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:17:05.501886 1 main.go:301] handling current node
I0210 11:17:15.502218 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:17:15.502347 1 main.go:301] handling current node
I0210 11:17:25.501596 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:17:25.501819 1 main.go:301] handling current node
I0210 11:17:35.509868 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:17:35.510072 1 main.go:301] handling current node
==> kindnet [9db35ce7df6ab45906886bea28fdd4f4702cf114ba71a471fce820bd75b505f4] <==
I0210 11:09:13.394168 1 controller.go:365] Waiting for informer caches to sync
I0210 11:09:13.394221 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I0210 11:09:13.594367 1 shared_informer.go:320] Caches are synced for kube-network-policies
I0210 11:09:13.594610 1 metrics.go:61] Registering metrics
I0210 11:09:13.594844 1 controller.go:401] Syncing nftables rules
I0210 11:09:23.401593 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:09:23.401651 1 main.go:301] handling current node
I0210 11:09:33.394640 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:09:33.394674 1 main.go:301] handling current node
I0210 11:09:43.403606 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:09:43.403639 1 main.go:301] handling current node
I0210 11:09:53.401626 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:09:53.401659 1 main.go:301] handling current node
I0210 11:10:03.393901 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:10:03.393938 1 main.go:301] handling current node
I0210 11:10:13.393906 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:10:13.393946 1 main.go:301] handling current node
I0210 11:10:23.399543 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:10:23.399575 1 main.go:301] handling current node
I0210 11:10:33.403631 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:10:33.403667 1 main.go:301] handling current node
I0210 11:10:43.401724 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:10:43.401759 1 main.go:301] handling current node
I0210 11:10:53.396258 1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
I0210 11:10:53.396398 1 main.go:301] handling current node
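Both kindnet blocks are the CNI daemonset's container logs (after and before the restart, respectively); the same output can be pulled for the pod listed in the node description above:

  kubectl -n kube-system logs kindnet-l58wz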
==> kube-apiserver [04c054919859612a7dd3b1388aaabeff6ce6117b5c57a348972e8b4260dd2d01] <==
I0210 11:08:51.804278 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0210 11:08:51.804311 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0210 11:08:51.841257 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0210 11:08:51.845901 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0210 11:08:51.845927 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0210 11:08:52.315954 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0210 11:08:52.368762 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0210 11:08:52.476672 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0210 11:08:52.477844 1 controller.go:606] quota admission added evaluator for: endpoints
I0210 11:08:52.483410 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0210 11:08:53.515524 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0210 11:08:54.147393 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0210 11:08:54.241030 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0210 11:09:02.590136 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0210 11:09:09.498700 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0210 11:09:09.676691 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0210 11:09:22.016589 1 client.go:360] parsed scheme: "passthrough"
I0210 11:09:22.016637 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0210 11:09:22.016668 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0210 11:10:01.784602 1 client.go:360] parsed scheme: "passthrough"
I0210 11:10:01.784664 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0210 11:10:01.784674 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0210 11:10:34.840672 1 client.go:360] parsed scheme: "passthrough"
I0210 11:10:34.840732 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0210 11:10:34.840743 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [ad6d38edf5bc8016a7b01c7edcd078318608407d82ba19b31b178a195b338ef1] <==
I0210 11:14:23.079183 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0210 11:14:23.079257 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0210 11:14:45.070745 1 handler_proxy.go:102] no RequestInfo found in the context
E0210 11:14:45.071102 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0210 11:14:45.071215 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0210 11:14:55.509715 1 client.go:360] parsed scheme: "passthrough"
I0210 11:14:55.509763 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0210 11:14:55.509796 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0210 11:15:28.022679 1 client.go:360] parsed scheme: "passthrough"
I0210 11:15:28.022724 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0210 11:15:28.022733 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0210 11:16:02.777037 1 client.go:360] parsed scheme: "passthrough"
I0210 11:16:02.777081 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0210 11:16:02.777089 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0210 11:16:39.513398 1 client.go:360] parsed scheme: "passthrough"
I0210 11:16:39.513444 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0210 11:16:39.513455 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0210 11:16:43.513630 1 handler_proxy.go:102] no RequestInfo found in the context
E0210 11:16:43.513709 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0210 11:16:43.513723 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0210 11:17:11.892638 1 client.go:360] parsed scheme: "passthrough"
I0210 11:17:11.892684 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0210 11:17:11.892694 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
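The repeated 503 for v1beta1.metrics.k8s.io means the aggregation layer cannot fetch an OpenAPI spec from the metrics-server backend, i.e. the registered APIService has no healthy endpoints behind it. Two checks that surface the same condition (pod name taken from the node description above):

  kubectl get apiservice v1beta1.metrics.k8s.io
  kubectl -n kube-system describe pod metrics-server-9975d5f86-nvn7z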
==> kube-controller-manager [aec35b105aa1dfdd9824acc9be165c74d8f25721b4e72d48900e3f2a9bc2eaaa] <==
E0210 11:13:32.869060 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0210 11:13:37.220910 1 request.go:655] Throttling request took 1.048450209s, request: GET:https://192.168.76.2:8443/apis/networking.k8s.io/v1beta1?timeout=32s
W0210 11:13:38.072352 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0210 11:14:03.402906 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0210 11:14:09.722886 1 request.go:655] Throttling request took 1.048243678s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
W0210 11:14:10.574609 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0210 11:14:33.904928 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0210 11:14:42.178192 1 request.go:655] Throttling request took 1.001625485s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
W0210 11:14:43.076624 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0210 11:15:04.410472 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0210 11:15:14.726998 1 request.go:655] Throttling request took 1.048336791s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
W0210 11:15:15.578520 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0210 11:15:34.912309 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0210 11:15:47.180578 1 request.go:655] Throttling request took 1.00001746s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1?timeout=32s
W0210 11:15:48.080667 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0210 11:16:05.414166 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0210 11:16:19.731340 1 request.go:655] Throttling request took 1.04839124s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
W0210 11:16:20.582835 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0210 11:16:35.915969 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0210 11:16:52.233366 1 request.go:655] Throttling request took 1.048396649s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
W0210 11:16:53.084834 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0210 11:17:06.417855 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0210 11:17:24.735176 1 request.go:655] Throttling request took 1.048214063s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0210 11:17:25.588134 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0210 11:17:36.919730 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
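The resource-quota and garbage-collector errors here are downstream of the same unavailable metrics.k8s.io group: both controllers enumerate every API group through discovery, and kubectl reproduces the identical failure message:

  kubectl api-resources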
==> kube-controller-manager [d49223327cb59f36de618d8970c835ef3007d8c0b14ac4e3908672491075782d] <==
I0210 11:09:09.614269 1 shared_informer.go:247] Caches are synced for taint
I0210 11:09:09.614363 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W0210 11:09:09.614478 1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-705847. Assuming now as a timestamp.
I0210 11:09:09.614559 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0210 11:09:09.614844 1 shared_informer.go:247] Caches are synced for GC
I0210 11:09:09.615583 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0210 11:09:09.615779 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0210 11:09:09.616457 1 event.go:291] "Event occurred" object="old-k8s-version-705847" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-705847 event: Registered Node old-k8s-version-705847 in Controller"
I0210 11:09:09.647292 1 shared_informer.go:247] Caches are synced for daemon sets
I0210 11:09:09.710786 1 shared_informer.go:247] Caches are synced for resource quota
I0210 11:09:09.716333 1 shared_informer.go:247] Caches are synced for resource quota
I0210 11:09:09.722530 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-l58wz"
I0210 11:09:09.728183 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qt8rk"
E0210 11:09:09.825040 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"a0093706-ef42-441b-9f97-68ae7e28fb5f", ResourceVersion:"262", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63874782534, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400138c7a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400138c7c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x400138c7e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001299f80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400138c
800), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400138c820), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400138c860)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400061a540), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d6aef8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004c20e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400095cdc8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d6af48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
E0210 11:09:09.840321 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3d4bd9ea-a394-478f-8f2d-6ff82b5400eb", ResourceVersion:"276", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63874782534, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241212-9f82dd49\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400138c8c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400138c8e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400138c900), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400138c920), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400138c940), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400138c960), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241212-9f82dd49", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400138c980)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400138c9c0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400061ac00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d6b168), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004c2150), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400095cdd0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d6b1b0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I0210 11:09:09.882983 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
E0210 11:09:09.914498 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3d4bd9ea-a394-478f-8f2d-6ff82b5400eb", ResourceVersion:"416", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63874782534, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241212-9f82dd49\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d9bd00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d9bd20)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001d9bd40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001d9bd60)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001d9bd80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generatio
n:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d9bda0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:
(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d9bdc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlo
ckStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CS
I:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001d9bde0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Q
uobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241212-9f82dd49", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001d9be00)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001d9be40)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i
:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", Sub
Path:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001d99320), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001dc8288), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004615e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinit
y:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400072b7f0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001dc82d0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v
1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
I0210 11:09:10.160604 1 shared_informer.go:247] Caches are synced for garbage collector
I0210 11:09:10.160626 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0210 11:09:10.184330 1 shared_informer.go:247] Caches are synced for garbage collector
I0210 11:09:11.224629 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0210 11:09:11.247535 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-bbmfl"
I0210 11:09:14.614820 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0210 11:11:02.247284 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E0210 11:11:02.457821 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
==> kube-proxy [2517ca7acc440266e73d02a000e1050852ff6f588aa67fd380e9850b18012708] <==
I0210 11:11:46.563285 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0210 11:11:46.563364 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0210 11:11:46.597454 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0210 11:11:46.597755 1 server_others.go:185] Using iptables Proxier.
I0210 11:11:46.598118 1 server.go:650] Version: v1.20.0
I0210 11:11:46.598814 1 config.go:315] Starting service config controller
I0210 11:11:46.598920 1 shared_informer.go:240] Waiting for caches to sync for service config
I0210 11:11:46.599047 1 config.go:224] Starting endpoint slice config controller
I0210 11:11:46.602802 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0210 11:11:46.702921 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0210 11:11:46.702953 1 shared_informer.go:247] Caches are synced for service config
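The 'Unknown proxy mode ""' warning indicates the mode field of the KubeProxyConfiguration was left empty, so kube-proxy falls back to iptables. In kubeadm-managed clusters the effective config is stored under the config.conf key of the kube-proxy ConfigMap:

  kubectl -n kube-system get configmap kube-proxy -o yaml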
==> kube-proxy [6d39bdbc1d81bb76feaa734f9ece5602070c27ef46b571816c2aeaa7edd54ec1] <==
I0210 11:09:10.707694 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0210 11:09:10.707793 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0210 11:09:10.755738 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0210 11:09:10.755829 1 server_others.go:185] Using iptables Proxier.
I0210 11:09:10.756028 1 server.go:650] Version: v1.20.0
I0210 11:09:10.756521 1 config.go:315] Starting service config controller
I0210 11:09:10.756533 1 shared_informer.go:240] Waiting for caches to sync for service config
I0210 11:09:10.759080 1 config.go:224] Starting endpoint slice config controller
I0210 11:09:10.759092 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0210 11:09:10.856774 1 shared_informer.go:247] Caches are synced for service config
I0210 11:09:10.859791 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [2ce24aaa2eea1a4135d752dda97f292f64a892cc9c43814a990d263ba48b42ff] <==
I0210 11:11:37.649387 1 serving.go:331] Generated self-signed cert in-memory
W0210 11:11:42.296414 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0210 11:11:42.299527 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0210 11:11:42.299711 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0210 11:11:42.299772 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0210 11:11:42.492809 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0210 11:11:42.495893 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0210 11:11:42.495910 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0210 11:11:42.495925 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0210 11:11:42.597631 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [8d3d8d966ae3770d86b7acee75ea4ffa51b71d8c8e157eb416868772851268fd] <==
I0210 11:08:48.714590 1 serving.go:331] Generated self-signed cert in-memory
W0210 11:08:51.082798 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0210 11:08:51.083028 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0210 11:08:51.083170 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0210 11:08:51.083252 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0210 11:08:51.138868 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0210 11:08:51.141706 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0210 11:08:51.141734 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0210 11:08:51.141755 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0210 11:08:51.166442 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0210 11:08:51.167587 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0210 11:08:51.173961 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0210 11:08:51.174479 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0210 11:08:51.174638 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0210 11:08:51.176219 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0210 11:08:51.176368 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0210 11:08:51.177681 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0210 11:08:51.185758 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0210 11:08:51.189797 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0210 11:08:51.193329 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0210 11:08:51.203326 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0210 11:08:52.026223 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0210 11:08:52.057637 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0210 11:08:54.841867 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Feb 10 11:15:50 old-k8s-version-705847 kubelet[665]: E0210 11:15:50.166193 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: I0210 11:15:51.165086 665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
Feb 10 11:15:51 old-k8s-version-705847 kubelet[665]: E0210 11:15:51.165434 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
Feb 10 11:16:01 old-k8s-version-705847 kubelet[665]: E0210 11:16:01.166143 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: I0210 11:16:03.165051 665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
Feb 10 11:16:03 old-k8s-version-705847 kubelet[665]: E0210 11:16:03.165397 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
Feb 10 11:16:13 old-k8s-version-705847 kubelet[665]: E0210 11:16:13.165902 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: I0210 11:16:14.165134 665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
Feb 10 11:16:14 old-k8s-version-705847 kubelet[665]: E0210 11:16:14.165502 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
Feb 10 11:16:25 old-k8s-version-705847 kubelet[665]: E0210 11:16:25.165821 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: I0210 11:16:28.165239 665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
Feb 10 11:16:28 old-k8s-version-705847 kubelet[665]: E0210 11:16:28.166303 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
Feb 10 11:16:39 old-k8s-version-705847 kubelet[665]: E0210 11:16:39.165848 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: I0210 11:16:43.165081 665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
Feb 10 11:16:43 old-k8s-version-705847 kubelet[665]: E0210 11:16:43.165947 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
Feb 10 11:16:50 old-k8s-version-705847 kubelet[665]: E0210 11:16:50.166097 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: I0210 11:16:56.165060 665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
Feb 10 11:16:56 old-k8s-version-705847 kubelet[665]: E0210 11:16:56.165902 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
Feb 10 11:17:01 old-k8s-version-705847 kubelet[665]: E0210 11:17:01.166316 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: I0210 11:17:09.165064 665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
Feb 10 11:17:09 old-k8s-version-705847 kubelet[665]: E0210 11:17:09.165973 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
Feb 10 11:17:14 old-k8s-version-705847 kubelet[665]: E0210 11:17:14.167326 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Feb 10 11:17:24 old-k8s-version-705847 kubelet[665]: I0210 11:17:24.167189 665 scope.go:95] [topologymanager] RemoveContainer - Container ID: b6b6d099aaf4cee0ef937c867b5d658d32acfa58e8c1105f29148cc4641e0dd6
Feb 10 11:17:24 old-k8s-version-705847 kubelet[665]: E0210 11:17:24.168374 665 pod_workers.go:191] Error syncing pod cf61dd38-4f85-4c99-a1de-fd60e3f09061 ("dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-r58kw_kubernetes-dashboard(cf61dd38-4f85-4c99-a1de-fd60e3f09061)"
Feb 10 11:17:28 old-k8s-version-705847 kubelet[665]: E0210 11:17:28.165971 665 pod_workers.go:191] Error syncing pod a844b987-1a9d-4a0b-b63d-4cdfc43faeb4 ("metrics-server-9975d5f86-nvn7z_kube-system(a844b987-1a9d-4a0b-b63d-4cdfc43faeb4)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
==> kubernetes-dashboard [6c8852ecb1c210dd825ea4e9920b735229937e13fcbc4e19dbd08d4f8b07fab7] <==
2025/02/10 11:12:08 Starting overwatch
2025/02/10 11:12:08 Using namespace: kubernetes-dashboard
2025/02/10 11:12:08 Using in-cluster config to connect to apiserver
2025/02/10 11:12:08 Using secret token for csrf signing
2025/02/10 11:12:08 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/02/10 11:12:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/02/10 11:12:08 Successful initial request to the apiserver, version: v1.20.0
2025/02/10 11:12:08 Generating JWE encryption key
2025/02/10 11:12:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/02/10 11:12:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/02/10 11:12:10 Initializing JWE encryption key from synchronized object
2025/02/10 11:12:10 Creating in-cluster Sidecar client
2025/02/10 11:12:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/10 11:12:10 Serving insecurely on HTTP port: 9090
2025/02/10 11:12:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/10 11:13:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/10 11:13:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/10 11:14:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/10 11:14:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/10 11:15:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/10 11:15:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/10 11:16:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/10 11:16:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/02/10 11:17:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
==> storage-provisioner [221dcab82eb8dd1aca0b27729220cb3fe58a3d07f3ff25a227e48e95e0d00525] <==
I0210 11:11:45.386345 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0210 11:12:15.390110 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
==> storage-provisioner [b7ef8424fcbcb62df5eac6b61f9688f3fb6bf1751069a2ab9298cde977a75c84] <==
I0210 11:12:28.309062 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0210 11:12:28.333562 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0210 11:12:28.333623 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0210 11:12:45.867628 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0210 11:12:45.868027 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-705847_a73b877e-2d56-419b-9f14-0d434040a716!
I0210 11:12:45.869712 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4dfaaa55-08a0-4e59-9db4-d4e5746b7f58", APIVersion:"v1", ResourceVersion:"851", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-705847_a73b877e-2d56-419b-9f14-0d434040a716 became leader
I0210 11:12:45.969079 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-705847_a73b877e-2d56-419b-9f14-0d434040a716!
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-705847 -n old-k8s-version-705847
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-705847 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-nvn7z
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-705847 describe pod metrics-server-9975d5f86-nvn7z
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-705847 describe pod metrics-server-9975d5f86-nvn7z: exit status 1 (187.823649ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-nvn7z" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-705847 describe pod metrics-server-9975d5f86-nvn7z: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (383.04s)
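Triage summary of the log above: the only non-running pod the harness found was metrics-server-9975d5f86-nvn7z, which the kubelet shows stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4 (an unreachable registry, which appears to be intentional for this suite); dashboard-metrics-scraper-8d5bb5db8-r58kw sat in CrashLoopBackOff, which is also the likely reason the dashboard's metric client health check kept failing every 30 seconds. The scheduler's "forbidden" errors at 11:08:51 look like a transient RBAC startup race; they stop once its caches sync at 11:08:54. The final post-mortem `kubectl describe pod` returned NotFound because it pinned a pod name after the ReplicaSet had already replaced that pod.
A minimal sketch of how one might re-run the post-mortem by hand, assuming the profile still exists; the k8s-app=metrics-server label is the usual minikube addon label, not confirmed by this log. Selecting by label rather than by pod name avoids the NotFound race seen above:
  kubectl --context old-k8s-version-705847 -n kube-system get pods -o wide
  kubectl --context old-k8s-version-705847 -n kube-system describe pod -l k8s-app=metrics-server
  kubectl --context old-k8s-version-705847 -n kube-system get events --sort-by=.lastTimestamp
  minikube -p old-k8s-version-705847 logs --file=logs.txt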