=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-169187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-169187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0: exit status 102 (6m14.061117762s)
-- stdout --
* [old-k8s-version-169187] minikube v1.35.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=20598
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-169187" primary control-plane node in "old-k8s-version-169187" cluster
* Pulling base image v0.0.46-1743675393-20591 ...
* Restarting existing docker container for "old-k8s-version-169187" ...
* Preparing Kubernetes v1.20.0 on Docker 28.0.4 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-169187 addons enable metrics-server
* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
-- /stdout --
** stderr **
I0407 13:45:37.341891 1819972 out.go:345] Setting OutFile to fd 1 ...
I0407 13:45:37.342134 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:45:37.342202 1819972 out.go:358] Setting ErrFile to fd 2...
I0407 13:45:37.342222 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:45:37.342525 1819972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
I0407 13:45:37.342955 1819972 out.go:352] Setting JSON to false
I0407 13:45:37.344071 1819972 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26886,"bootTime":1744006652,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0407 13:45:37.344158 1819972 start.go:139] virtualization:
I0407 13:45:37.347529 1819972 out.go:177] * [old-k8s-version-169187] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0407 13:45:37.351311 1819972 out.go:177] - MINIKUBE_LOCATION=20598
I0407 13:45:37.351366 1819972 notify.go:220] Checking for updates...
I0407 13:45:37.357340 1819972 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0407 13:45:37.360277 1819972 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
I0407 13:45:37.363074 1819972 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
I0407 13:45:37.365875 1819972 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0407 13:45:37.368731 1819972 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0407 13:45:37.372121 1819972 config.go:182] Loaded profile config "old-k8s-version-169187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0407 13:45:37.375464 1819972 out.go:177] * Kubernetes 1.32.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.32.2
I0407 13:45:37.378238 1819972 driver.go:394] Setting default libvirt URI to qemu:///system
I0407 13:45:37.435726 1819972 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0407 13:45:37.435855 1819972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0407 13:45:37.543723 1819972 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 13:45:37.528636748 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0407 13:45:37.543840 1819972 docker.go:318] overlay module found
I0407 13:45:37.546924 1819972 out.go:177] * Using the docker driver based on existing profile
I0407 13:45:37.549702 1819972 start.go:297] selected driver: docker
I0407 13:45:37.549731 1819972 start.go:901] validating driver "docker" against &{Name:old-k8s-version-169187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169187 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 13:45:37.549838 1819972 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0407 13:45:37.550521 1819972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0407 13:45:37.660991 1819972 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2025-04-07 13:45:37.650479302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0407 13:45:37.661332 1819972 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0407 13:45:37.661370 1819972 cni.go:84] Creating CNI manager for ""
I0407 13:45:37.661430 1819972 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0407 13:45:37.661472 1819972 start.go:340] cluster config:
{Name:old-k8s-version-169187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169187 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 13:45:37.664743 1819972 out.go:177] * Starting "old-k8s-version-169187" primary control-plane node in "old-k8s-version-169187" cluster
I0407 13:45:37.667576 1819972 cache.go:121] Beginning downloading kic base image for docker with docker
I0407 13:45:37.670620 1819972 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
I0407 13:45:37.673342 1819972 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0407 13:45:37.673406 1819972 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
I0407 13:45:37.673422 1819972 cache.go:56] Caching tarball of preloaded images
I0407 13:45:37.673511 1819972 preload.go:172] Found /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0407 13:45:37.673527 1819972 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
I0407 13:45:37.673645 1819972 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/config.json ...
I0407 13:45:37.673868 1819972 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
I0407 13:45:37.706656 1819972 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
I0407 13:45:37.706683 1819972 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
I0407 13:45:37.706697 1819972 cache.go:230] Successfully downloaded all kic artifacts
I0407 13:45:37.706720 1819972 start.go:360] acquireMachinesLock for old-k8s-version-169187: {Name:mkeb44ab1d4b31711db3c3abb0770c2a53c1d6ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:45:37.706779 1819972 start.go:364] duration metric: took 36.71µs to acquireMachinesLock for "old-k8s-version-169187"
I0407 13:45:37.706808 1819972 start.go:96] Skipping create...Using existing machine configuration
I0407 13:45:37.706814 1819972 fix.go:54] fixHost starting:
I0407 13:45:37.707101 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
I0407 13:45:37.745699 1819972 fix.go:112] recreateIfNeeded on old-k8s-version-169187: state=Stopped err=<nil>
W0407 13:45:37.745740 1819972 fix.go:138] unexpected machine state, will restart: <nil>
I0407 13:45:37.748821 1819972 out.go:177] * Restarting existing docker container for "old-k8s-version-169187" ...
I0407 13:45:37.751619 1819972 cli_runner.go:164] Run: docker start old-k8s-version-169187
I0407 13:45:38.192838 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
I0407 13:45:38.223808 1819972 kic.go:430] container "old-k8s-version-169187" state is running.
I0407 13:45:38.224232 1819972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169187
I0407 13:45:38.256849 1819972 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/config.json ...
I0407 13:45:38.257096 1819972 machine.go:93] provisionDockerMachine start ...
I0407 13:45:38.257166 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:38.278574 1819972 main.go:141] libmachine: Using SSH client type: native
I0407 13:45:38.278917 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34611 <nil> <nil>}
I0407 13:45:38.278927 1819972 main.go:141] libmachine: About to run SSH command:
hostname
I0407 13:45:38.279615 1819972 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
I0407 13:45:41.403824 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169187
I0407 13:45:41.403855 1819972 ubuntu.go:169] provisioning hostname "old-k8s-version-169187"
I0407 13:45:41.403919 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:41.421097 1819972 main.go:141] libmachine: Using SSH client type: native
I0407 13:45:41.421407 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34611 <nil> <nil>}
I0407 13:45:41.421422 1819972 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-169187 && echo "old-k8s-version-169187" | sudo tee /etc/hostname
I0407 13:45:41.564008 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-169187
I0407 13:45:41.564189 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:41.584254 1819972 main.go:141] libmachine: Using SSH client type: native
I0407 13:45:41.584568 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34611 <nil> <nil>}
I0407 13:45:41.584586 1819972 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-169187' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-169187/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-169187' | sudo tee -a /etc/hosts;
fi
fi
I0407 13:45:41.727962 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0407 13:45:41.728037 1819972 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20598-1489638/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-1489638/.minikube}
I0407 13:45:41.728078 1819972 ubuntu.go:177] setting up certificates
I0407 13:45:41.728114 1819972 provision.go:84] configureAuth start
I0407 13:45:41.728205 1819972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169187
I0407 13:45:41.756817 1819972 provision.go:143] copyHostCerts
I0407 13:45:41.756882 1819972 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem, removing ...
I0407 13:45:41.756898 1819972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem
I0407 13:45:41.756979 1819972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem (1082 bytes)
I0407 13:45:41.757069 1819972 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem, removing ...
I0407 13:45:41.757074 1819972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem
I0407 13:45:41.757099 1819972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem (1123 bytes)
I0407 13:45:41.757146 1819972 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem, removing ...
I0407 13:45:41.757150 1819972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem
I0407 13:45:41.757172 1819972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem (1675 bytes)
I0407 13:45:41.757214 1819972 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-169187 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-169187]
I0407 13:45:42.201410 1819972 provision.go:177] copyRemoteCerts
I0407 13:45:42.201683 1819972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0407 13:45:42.201764 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:42.229784 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
I0407 13:45:42.334997 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0407 13:45:42.367752 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0407 13:45:42.395789 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0407 13:45:42.423690 1819972 provision.go:87] duration metric: took 695.547232ms to configureAuth
I0407 13:45:42.423759 1819972 ubuntu.go:193] setting minikube options for container-runtime
I0407 13:45:42.423994 1819972 config.go:182] Loaded profile config "old-k8s-version-169187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0407 13:45:42.424088 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:42.445342 1819972 main.go:141] libmachine: Using SSH client type: native
I0407 13:45:42.445662 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34611 <nil> <nil>}
I0407 13:45:42.445674 1819972 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0407 13:45:42.572291 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0407 13:45:42.572316 1819972 ubuntu.go:71] root file system type: overlay
I0407 13:45:42.572423 1819972 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0407 13:45:42.572496 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:42.595973 1819972 main.go:141] libmachine: Using SSH client type: native
I0407 13:45:42.596283 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34611 <nil> <nil>}
I0407 13:45:42.596372 1819972 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0407 13:45:42.743154 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0407 13:45:42.743260 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:42.769069 1819972 main.go:141] libmachine: Using SSH client type: native
I0407 13:45:42.769383 1819972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34611 <nil> <nil>}
I0407 13:45:42.769400 1819972 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0407 13:45:42.914314 1819972 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0407 13:45:42.914356 1819972 machine.go:96] duration metric: took 4.657241722s to provisionDockerMachine
I0407 13:45:42.914369 1819972 start.go:293] postStartSetup for "old-k8s-version-169187" (driver="docker")
I0407 13:45:42.914380 1819972 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0407 13:45:42.914458 1819972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0407 13:45:42.914518 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:42.945894 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
I0407 13:45:43.045241 1819972 ssh_runner.go:195] Run: cat /etc/os-release
I0407 13:45:43.049083 1819972 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0407 13:45:43.049189 1819972 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0407 13:45:43.049257 1819972 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0407 13:45:43.049284 1819972 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0407 13:45:43.049307 1819972 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1489638/.minikube/addons for local assets ...
I0407 13:45:43.049393 1819972 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1489638/.minikube/files for local assets ...
I0407 13:45:43.049529 1819972 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem -> 14950262.pem in /etc/ssl/certs
I0407 13:45:43.049774 1819972 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0407 13:45:43.060722 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem --> /etc/ssl/certs/14950262.pem (1708 bytes)
I0407 13:45:43.090976 1819972 start.go:296] duration metric: took 176.590145ms for postStartSetup
I0407 13:45:43.091062 1819972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0407 13:45:43.091108 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:43.110246 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
I0407 13:45:43.199348 1819972 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0407 13:45:43.204977 1819972 fix.go:56] duration metric: took 5.49815662s for fixHost
I0407 13:45:43.205004 1819972 start.go:83] releasing machines lock for "old-k8s-version-169187", held for 5.498207172s
I0407 13:45:43.205075 1819972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-169187
I0407 13:45:43.223176 1819972 ssh_runner.go:195] Run: cat /version.json
I0407 13:45:43.223235 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:43.223478 1819972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0407 13:45:43.223554 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:43.275021 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
I0407 13:45:43.276078 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
I0407 13:45:43.516700 1819972 ssh_runner.go:195] Run: systemctl --version
I0407 13:45:43.522855 1819972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0407 13:45:43.529739 1819972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0407 13:45:43.562961 1819972 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0407 13:45:43.563102 1819972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0407 13:45:43.591807 1819972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
I0407 13:45:43.623565 1819972 cni.go:308] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0407 13:45:43.623692 1819972 start.go:495] detecting cgroup driver to use...
I0407 13:45:43.623763 1819972 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0407 13:45:43.623961 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0407 13:45:43.651127 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0407 13:45:43.663772 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0407 13:45:43.682240 1819972 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0407 13:45:43.682397 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0407 13:45:43.694433 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 13:45:43.716820 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0407 13:45:43.729350 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 13:45:43.745991 1819972 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0407 13:45:43.761204 1819972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0407 13:45:43.777047 1819972 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0407 13:45:43.788562 1819972 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0407 13:45:43.802464 1819972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:45:43.927726 1819972 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0407 13:45:44.071278 1819972 start.go:495] detecting cgroup driver to use...
I0407 13:45:44.071414 1819972 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0407 13:45:44.071567 1819972 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0407 13:45:44.100118 1819972 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0407 13:45:44.100239 1819972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0407 13:45:44.117455 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0407 13:45:44.144889 1819972 ssh_runner.go:195] Run: which cri-dockerd
I0407 13:45:44.150734 1819972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0407 13:45:44.162526 1819972 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0407 13:45:44.185684 1819972 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0407 13:45:44.348770 1819972 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0407 13:45:44.500284 1819972 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0407 13:45:44.500377 1819972 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0407 13:45:44.530688 1819972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:45:44.684157 1819972 ssh_runner.go:195] Run: sudo systemctl restart docker
I0407 13:45:45.490323 1819972 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0407 13:45:45.514741 1819972 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0407 13:45:45.556276 1819972 out.go:235] * Preparing Kubernetes v1.20.0 on Docker 28.0.4 ...
I0407 13:45:45.556432 1819972 cli_runner.go:164] Run: docker network inspect old-k8s-version-169187 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 13:45:45.581423 1819972 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0407 13:45:45.585657 1819972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0407 13:45:45.596771 1819972 kubeadm.go:883] updating cluster {Name:old-k8s-version-169187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169187 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0407 13:45:45.596881 1819972 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0407 13:45:45.596948 1819972 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0407 13:45:45.621465 1819972 docker.go:689] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.0
registry.k8s.io/kube-proxy:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
registry.k8s.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
registry.k8s.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
registry.k8s.io/kube-scheduler:v1.20.0
k8s.gcr.io/etcd:3.4.13-0
registry.k8s.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
registry.k8s.io/coredns:1.7.0
k8s.gcr.io/pause:3.2
registry.k8s.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0407 13:45:45.621485 1819972 docker.go:619] Images already preloaded, skipping extraction
I0407 13:45:45.621544 1819972 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0407 13:45:45.648322 1819972 docker.go:689] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/kube-proxy:v1.20.0
k8s.gcr.io/kube-proxy:v1.20.0
registry.k8s.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
registry.k8s.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
registry.k8s.io/kube-scheduler:v1.20.0
k8s.gcr.io/etcd:3.4.13-0
registry.k8s.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
registry.k8s.io/coredns:1.7.0
k8s.gcr.io/pause:3.2
registry.k8s.io/pause:3.2
gcr.io/k8s-minikube/busybox:1.28.4-glibc
-- /stdout --
I0407 13:45:45.648394 1819972 cache_images.go:84] Images are preloaded, skipping loading
I0407 13:45:45.648418 1819972 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 docker true true} ...
I0407 13:45:45.648540 1819972 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-169187 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169187 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0407 13:45:45.648630 1819972 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0407 13:45:45.703092 1819972 cni.go:84] Creating CNI manager for ""
I0407 13:45:45.703116 1819972 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
I0407 13:45:45.703125 1819972 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0407 13:45:45.703143 1819972 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-169187 NodeName:old-k8s-version-169187 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0407 13:45:45.703274 1819972 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /var/run/dockershim.sock
name: "old-k8s-version-169187"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0407 13:45:45.703337 1819972 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0407 13:45:45.712098 1819972 binaries.go:44] Found k8s binaries, skipping transfer
I0407 13:45:45.712209 1819972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0407 13:45:45.723419 1819972 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
I0407 13:45:45.742689 1819972 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0407 13:45:45.760834 1819972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
I0407 13:45:45.778556 1819972 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0407 13:45:45.781915 1819972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0407 13:45:45.793442 1819972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:45:45.892660 1819972 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0407 13:45:45.907271 1819972 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187 for IP: 192.168.76.2
I0407 13:45:45.907288 1819972 certs.go:194] generating shared ca certs ...
I0407 13:45:45.907304 1819972 certs.go:226] acquiring lock for ca certs: {Name:mk03ca927c02de3344f72431a7d9f1cc9d84ee54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:45:45.907437 1819972 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.key
I0407 13:45:45.907475 1819972 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/proxy-client-ca.key
I0407 13:45:45.907482 1819972 certs.go:256] generating profile certs ...
I0407 13:45:45.907578 1819972 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/client.key
I0407 13:45:45.907643 1819972 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/apiserver.key.b87325ea
I0407 13:45:45.907683 1819972 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/proxy-client.key
I0407 13:45:45.907793 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/1495026.pem (1338 bytes)
W0407 13:45:45.907819 1819972 certs.go:480] ignoring /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/1495026_empty.pem, impossibly tiny 0 bytes
I0407 13:45:45.907827 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca-key.pem (1679 bytes)
I0407 13:45:45.907851 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem (1082 bytes)
I0407 13:45:45.907873 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem (1123 bytes)
I0407 13:45:45.907893 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/key.pem (1675 bytes)
I0407 13:45:45.907932 1819972 certs.go:484] found cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem (1708 bytes)
I0407 13:45:45.908498 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0407 13:45:45.940610 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0407 13:45:45.967616 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0407 13:45:46.010354 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0407 13:45:46.058600 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0407 13:45:46.097263 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0407 13:45:46.146155 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0407 13:45:46.190071 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/old-k8s-version-169187/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0407 13:45:46.220895 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem --> /usr/share/ca-certificates/14950262.pem (1708 bytes)
I0407 13:45:46.259034 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0407 13:45:46.285109 1819972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/1495026.pem --> /usr/share/ca-certificates/1495026.pem (1338 bytes)
I0407 13:45:46.311601 1819972 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0407 13:45:46.330743 1819972 ssh_runner.go:195] Run: openssl version
I0407 13:45:46.337946 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14950262.pem && ln -fs /usr/share/ca-certificates/14950262.pem /etc/ssl/certs/14950262.pem"
I0407 13:45:46.347347 1819972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14950262.pem
I0407 13:45:46.350728 1819972 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 7 12:57 /usr/share/ca-certificates/14950262.pem
I0407 13:45:46.350810 1819972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14950262.pem
I0407 13:45:46.357631 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14950262.pem /etc/ssl/certs/3ec20f2e.0"
I0407 13:45:46.366607 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0407 13:45:46.376047 1819972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0407 13:45:46.379342 1819972 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 7 12:50 /usr/share/ca-certificates/minikubeCA.pem
I0407 13:45:46.379409 1819972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0407 13:45:46.387122 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0407 13:45:46.396842 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1495026.pem && ln -fs /usr/share/ca-certificates/1495026.pem /etc/ssl/certs/1495026.pem"
I0407 13:45:46.406421 1819972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1495026.pem
I0407 13:45:46.409782 1819972 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 7 12:57 /usr/share/ca-certificates/1495026.pem
I0407 13:45:46.409875 1819972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1495026.pem
I0407 13:45:46.416761 1819972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1495026.pem /etc/ssl/certs/51391683.0"
I0407 13:45:46.425853 1819972 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0407 13:45:46.429255 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0407 13:45:46.436825 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0407 13:45:46.443732 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0407 13:45:46.450652 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0407 13:45:46.457612 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0407 13:45:46.464594 1819972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0407 13:45:46.471412 1819972 kubeadm.go:392] StartCluster: {Name:old-k8s-version-169187 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-169187 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 13:45:46.471648 1819972 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0407 13:45:46.491771 1819972 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0407 13:45:46.500534 1819972 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0407 13:45:46.500562 1819972 kubeadm.go:593] restartPrimaryControlPlane start ...
I0407 13:45:46.500633 1819972 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0407 13:45:46.509213 1819972 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0407 13:45:46.509691 1819972 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-169187" does not appear in /home/jenkins/minikube-integration/20598-1489638/kubeconfig
I0407 13:45:46.509875 1819972 kubeconfig.go:62] /home/jenkins/minikube-integration/20598-1489638/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-169187" cluster setting kubeconfig missing "old-k8s-version-169187" context setting]
I0407 13:45:46.510186 1819972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1489638/kubeconfig: {Name:mk35d977c3a2e102445ffcc403aa71fe5bdeafe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:45:46.511455 1819972 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0407 13:45:46.520633 1819972 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0407 13:45:46.520665 1819972 kubeadm.go:597] duration metric: took 20.097076ms to restartPrimaryControlPlane
I0407 13:45:46.520675 1819972 kubeadm.go:394] duration metric: took 49.270487ms to StartCluster
I0407 13:45:46.520715 1819972 settings.go:142] acquiring lock: {Name:mk7d059a74c0e18dafa1f05777e364166f9e2e1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:45:46.520789 1819972 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/20598-1489638/kubeconfig
I0407 13:45:46.521359 1819972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1489638/kubeconfig: {Name:mk35d977c3a2e102445ffcc403aa71fe5bdeafe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:45:46.521552 1819972 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0407 13:45:46.521867 1819972 config.go:182] Loaded profile config "old-k8s-version-169187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0407 13:45:46.521910 1819972 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0407 13:45:46.521980 1819972 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-169187"
I0407 13:45:46.521993 1819972 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-169187"
W0407 13:45:46.522004 1819972 addons.go:247] addon storage-provisioner should already be in state true
I0407 13:45:46.522024 1819972 host.go:66] Checking if "old-k8s-version-169187" exists ...
I0407 13:45:46.522186 1819972 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-169187"
I0407 13:45:46.522221 1819972 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-169187"
I0407 13:45:46.522516 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
I0407 13:45:46.522602 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
I0407 13:45:46.523222 1819972 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-169187"
I0407 13:45:46.523245 1819972 addons.go:238] Setting addon metrics-server=true in "old-k8s-version-169187"
W0407 13:45:46.523253 1819972 addons.go:247] addon metrics-server should already be in state true
I0407 13:45:46.523284 1819972 host.go:66] Checking if "old-k8s-version-169187" exists ...
I0407 13:45:46.523768 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
I0407 13:45:46.526156 1819972 addons.go:69] Setting dashboard=true in profile "old-k8s-version-169187"
I0407 13:45:46.526246 1819972 addons.go:238] Setting addon dashboard=true in "old-k8s-version-169187"
W0407 13:45:46.526606 1819972 addons.go:247] addon dashboard should already be in state true
I0407 13:45:46.526679 1819972 host.go:66] Checking if "old-k8s-version-169187" exists ...
I0407 13:45:46.526594 1819972 out.go:177] * Verifying Kubernetes components...
I0407 13:45:46.533287 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
I0407 13:45:46.534331 1819972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:45:46.573339 1819972 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-169187"
W0407 13:45:46.573369 1819972 addons.go:247] addon default-storageclass should already be in state true
I0407 13:45:46.573397 1819972 host.go:66] Checking if "old-k8s-version-169187" exists ...
I0407 13:45:46.573822 1819972 cli_runner.go:164] Run: docker container inspect old-k8s-version-169187 --format={{.State.Status}}
I0407 13:45:46.606774 1819972 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0407 13:45:46.610200 1819972 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:45:46.610226 1819972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0407 13:45:46.610303 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:46.617518 1819972 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0407 13:45:46.620134 1819972 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0407 13:45:46.620158 1819972 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0407 13:45:46.620225 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:46.620355 1819972 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0407 13:45:46.623180 1819972 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0407 13:45:46.625940 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0407 13:45:46.625963 1819972 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0407 13:45:46.626032 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:46.649007 1819972 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
I0407 13:45:46.649036 1819972 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0407 13:45:46.649099 1819972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-169187
I0407 13:45:46.684331 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
I0407 13:45:46.692435 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
I0407 13:45:46.705140 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
I0407 13:45:46.719665 1819972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34611 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/old-k8s-version-169187/id_rsa Username:docker}
I0407 13:45:46.763170 1819972 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0407 13:45:46.805887 1819972 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-169187" to be "Ready" ...
I0407 13:45:46.878898 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0407 13:45:46.878970 1819972 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0407 13:45:46.901522 1819972 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0407 13:45:46.901547 1819972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0407 13:45:46.909493 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:45:46.922608 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0407 13:45:46.922634 1819972 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0407 13:45:46.944673 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0407 13:45:46.950659 1819972 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0407 13:45:46.950683 1819972 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0407 13:45:46.974368 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0407 13:45:46.974393 1819972 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0407 13:45:47.001119 1819972 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0407 13:45:47.001145 1819972 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0407 13:45:47.026054 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0407 13:45:47.026090 1819972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0407 13:45:47.070689 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0407 13:45:47.139408 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0407 13:45:47.139433 1819972 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
W0407 13:45:47.147296 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:47.147334 1819972 retry.go:31] will retry after 137.738646ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:47.176660 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0407 13:45:47.176685 1819972 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0407 13:45:47.195973 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0407 13:45:47.196004 1819972 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0407 13:45:47.214949 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0407 13:45:47.214974 1819972 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
W0407 13:45:47.251175 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:47.251207 1819972 retry.go:31] will retry after 322.914186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:47.274102 1819972 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0407 13:45:47.274127 1819972 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0407 13:45:47.285443 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:45:47.340477 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0407 13:45:47.378347 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:47.378379 1819972 retry.go:31] will retry after 203.482972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:45:47.519573 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:47.519612 1819972 retry.go:31] will retry after 498.955382ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:45:47.560042 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:47.560133 1819972 retry.go:31] will retry after 359.490181ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:47.575363 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0407 13:45:47.582867 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0407 13:45:47.712275 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:47.712374 1819972 retry.go:31] will retry after 426.606451ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:45:47.752406 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:47.752440 1819972 retry.go:31] will retry after 530.335448ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:47.920183 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0407 13:45:48.010648 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:48.010686 1819972 retry.go:31] will retry after 339.685278ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:48.018818 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:45:48.139416 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0407 13:45:48.168216 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:48.168284 1819972 retry.go:31] will retry after 734.85388ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:48.283571 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0407 13:45:48.346381 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:48.346436 1819972 retry.go:31] will retry after 735.365017ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:48.350553 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0407 13:45:48.526652 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:48.526681 1819972 retry.go:31] will retry after 435.394566ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:45:48.537390 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:48.537417 1819972 retry.go:31] will retry after 308.772517ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:48.807238 1819972 node_ready.go:53] error getting node "old-k8s-version-169187": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-169187": dial tcp 192.168.76.2:8443: connect: connection refused
I0407 13:45:48.846571 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0407 13:45:48.904028 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:45:48.962901 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0407 13:45:49.036851 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:49.036887 1819972 retry.go:31] will retry after 606.867748ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:49.082154 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0407 13:45:49.183214 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:49.183241 1819972 retry.go:31] will retry after 1.218106895s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:45:49.216787 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:49.216817 1819972 retry.go:31] will retry after 558.290441ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:45:49.266845 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:49.266880 1819972 retry.go:31] will retry after 1.022558809s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:49.644688 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0407 13:45:49.775338 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0407 13:45:49.847411 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:49.847590 1819972 retry.go:31] will retry after 1.616020397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:45:49.915468 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:49.915528 1819972 retry.go:31] will retry after 642.299972ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:50.289874 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0407 13:45:50.401792 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:45:50.558203 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0407 13:45:50.704273 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:50.704313 1819972 retry.go:31] will retry after 809.469556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:45:50.721883 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:50.721918 1819972 retry.go:31] will retry after 960.830005ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0407 13:45:50.938639 1819972 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:50.938667 1819972 retry.go:31] will retry after 1.060610394s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0407 13:45:51.464031 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0407 13:45:51.514512 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0407 13:45:51.683802 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0407 13:45:52.000121 1819972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0407 13:45:59.950854 1819972 node_ready.go:49] node "old-k8s-version-169187" has status "Ready":"True"
I0407 13:45:59.950884 1819972 node_ready.go:38] duration metric: took 13.14496306s for node "old-k8s-version-169187" to be "Ready" ...
I0407 13:45:59.950896 1819972 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0407 13:46:00.318686 1819972 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-zpflr" in "kube-system" namespace to be "Ready" ...
I0407 13:46:00.437587 1819972 pod_ready.go:93] pod "coredns-74ff55c5b-zpflr" in "kube-system" namespace has status "Ready":"True"
I0407 13:46:00.437664 1819972 pod_ready.go:82] duration metric: took 118.944951ms for pod "coredns-74ff55c5b-zpflr" in "kube-system" namespace to be "Ready" ...
I0407 13:46:00.437691 1819972 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
I0407 13:46:00.474570 1819972 pod_ready.go:93] pod "etcd-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"True"
I0407 13:46:00.474645 1819972 pod_ready.go:82] duration metric: took 36.908343ms for pod "etcd-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
I0407 13:46:00.474680 1819972 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
I0407 13:46:01.479854 1819972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.01575852s)
I0407 13:46:01.480073 1819972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.965533446s)
I0407 13:46:01.480341 1819972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.796513822s)
I0407 13:46:01.480420 1819972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.480274795s)
I0407 13:46:01.480431 1819972 addons.go:479] Verifying addon metrics-server=true in "old-k8s-version-169187"
I0407 13:46:01.483201 1819972 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-169187 addons enable metrics-server
I0407 13:46:01.488587 1819972 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
I0407 13:46:01.491749 1819972 addons.go:514] duration metric: took 14.969835351s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
I0407 13:46:02.479846 1819972 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:04.979140 1819972 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:06.980686 1819972 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:09.480337 1819972 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"True"
I0407 13:46:09.480358 1819972 pod_ready.go:82] duration metric: took 9.00565624s for pod "kube-apiserver-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
I0407 13:46:09.480374 1819972 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
I0407 13:46:11.486698 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:13.986508 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:16.485738 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:19.007617 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:21.485087 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:23.487243 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:25.986791 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:28.486233 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:30.985074 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:32.987247 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:35.486342 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:37.986787 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:40.486108 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:42.486686 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:44.985467 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:46.986540 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:48.992761 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:51.485708 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:53.488061 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:55.986202 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:46:58.486551 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:00.488834 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:02.989313 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:04.989998 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:07.486888 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:09.992665 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:12.486545 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:14.486592 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:16.986585 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:19.489044 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:21.985350 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:23.985715 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:25.986326 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:27.987398 1819972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:28.985871 1819972 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"True"
I0407 13:47:28.985898 1819972 pod_ready.go:82] duration metric: took 1m19.50551601s for pod "kube-controller-manager-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
I0407 13:47:28.985912 1819972 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d8l5m" in "kube-system" namespace to be "Ready" ...
I0407 13:47:28.990341 1819972 pod_ready.go:93] pod "kube-proxy-d8l5m" in "kube-system" namespace has status "Ready":"True"
I0407 13:47:28.990366 1819972 pod_ready.go:82] duration metric: took 4.448112ms for pod "kube-proxy-d8l5m" in "kube-system" namespace to be "Ready" ...
I0407 13:47:28.990378 1819972 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
I0407 13:47:29.000261 1819972 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-169187" in "kube-system" namespace has status "Ready":"True"
I0407 13:47:29.000287 1819972 pod_ready.go:82] duration metric: took 9.901857ms for pod "kube-scheduler-old-k8s-version-169187" in "kube-system" namespace to be "Ready" ...
I0407 13:47:29.000299 1819972 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace to be "Ready" ...
I0407 13:47:31.015306 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:33.505236 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:35.505498 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:37.505592 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:40.031733 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:42.505868 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:44.506299 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:47.007309 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:49.505231 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:51.505641 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:53.506057 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:56.008002 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:47:58.506307 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:00.543344 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:03.007218 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:05.009829 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:07.506066 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:09.511731 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:12.010529 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:14.505657 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:17.006488 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:19.505471 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:21.505795 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:24.009585 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:26.505995 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:29.005067 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:31.013227 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:33.505459 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:35.506271 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:38.009021 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:40.016937 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:42.505303 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:44.505496 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:46.505979 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:49.011069 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:51.505964 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:53.506244 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:55.506390 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:48:58.009508 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:00.028928 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:02.505916 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:05.008679 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:07.505851 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:09.505950 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:11.506765 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:14.008686 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:16.010395 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:18.506565 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:21.008369 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:23.509187 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:26.006852 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:28.506387 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:31.017283 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:33.505743 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:35.506060 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:38.011236 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:40.506321 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:43.007067 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:45.011190 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:47.507746 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:50.009909 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:52.012778 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:54.506025 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:57.007303 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:49:59.505257 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:01.577654 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:04.006322 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:06.010917 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:08.506143 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:10.510360 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:13.007003 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:15.021761 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:17.506342 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:20.018190 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:22.512125 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:25.015143 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:27.506468 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:30.018264 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:32.506128 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:35.009129 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:37.014036 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:39.505741 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:41.506092 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:44.006600 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:46.007826 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:48.008260 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:50.015194 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:52.041098 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:54.505268 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:56.505734 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:50:58.505969 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:00.506749 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:03.019804 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:05.506022 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:08.009389 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:10.014918 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:12.506567 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:15.021540 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:17.505345 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:19.506004 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:22.009002 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:24.009399 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:26.510537 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:29.006696 1819972 pod_ready.go:103] pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace has status "Ready":"False"
I0407 13:51:29.006729 1819972 pod_ready.go:82] duration metric: took 4m0.006422181s for pod "metrics-server-9975d5f86-7rkcc" in "kube-system" namespace to be "Ready" ...
E0407 13:51:29.006738 1819972 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I0407 13:51:29.006746 1819972 pod_ready.go:39] duration metric: took 5m29.055838962s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0407 13:51:29.006765 1819972 api_server.go:52] waiting for apiserver process to appear ...
I0407 13:51:29.006855 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0407 13:51:29.026509 1819972 logs.go:282] 2 containers: [82525be035b3 78f8992ce8b4]
I0407 13:51:29.026600 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0407 13:51:29.048568 1819972 logs.go:282] 2 containers: [b45737d73f96 f4fcf1ba0dce]
I0407 13:51:29.048651 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0407 13:51:29.067406 1819972 logs.go:282] 2 containers: [a2086baae207 d92117844997]
I0407 13:51:29.067532 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0407 13:51:29.087437 1819972 logs.go:282] 2 containers: [fce53c7f2eb0 3a9781764312]
I0407 13:51:29.087614 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0407 13:51:29.106213 1819972 logs.go:282] 2 containers: [062895b6a45a 7cb4581969c6]
I0407 13:51:29.106301 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0407 13:51:29.124943 1819972 logs.go:282] 2 containers: [c2da54d5c256 3e48a853c03b]
I0407 13:51:29.125036 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0407 13:51:29.143294 1819972 logs.go:282] 0 containers: []
W0407 13:51:29.143365 1819972 logs.go:284] No container was found matching "kindnet"
I0407 13:51:29.143439 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0407 13:51:29.164003 1819972 logs.go:282] 1 containers: [c66d59ac00e0]
I0407 13:51:29.164083 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0407 13:51:29.186403 1819972 logs.go:282] 2 containers: [55bf8eb1ab94 fcbefe8497a0]
I0407 13:51:29.186436 1819972 logs.go:123] Gathering logs for describe nodes ...
I0407 13:51:29.186448 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0407 13:51:29.353217 1819972 logs.go:123] Gathering logs for coredns [d92117844997] ...
I0407 13:51:29.353251 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d92117844997"
I0407 13:51:29.387072 1819972 logs.go:123] Gathering logs for container status ...
I0407 13:51:29.387100 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0407 13:51:29.455821 1819972 logs.go:123] Gathering logs for coredns [a2086baae207] ...
I0407 13:51:29.455854 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2086baae207"
I0407 13:51:29.477100 1819972 logs.go:123] Gathering logs for kube-scheduler [fce53c7f2eb0] ...
I0407 13:51:29.477128 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fce53c7f2eb0"
I0407 13:51:29.501934 1819972 logs.go:123] Gathering logs for kube-scheduler [3a9781764312] ...
I0407 13:51:29.501962 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9781764312"
I0407 13:51:29.528570 1819972 logs.go:123] Gathering logs for kube-proxy [7cb4581969c6] ...
I0407 13:51:29.528725 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cb4581969c6"
I0407 13:51:29.553260 1819972 logs.go:123] Gathering logs for kube-controller-manager [c2da54d5c256] ...
I0407 13:51:29.553288 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2da54d5c256"
I0407 13:51:29.596765 1819972 logs.go:123] Gathering logs for kube-controller-manager [3e48a853c03b] ...
I0407 13:51:29.596803 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e48a853c03b"
I0407 13:51:29.647057 1819972 logs.go:123] Gathering logs for storage-provisioner [55bf8eb1ab94] ...
I0407 13:51:29.647091 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55bf8eb1ab94"
I0407 13:51:29.684442 1819972 logs.go:123] Gathering logs for storage-provisioner [fcbefe8497a0] ...
I0407 13:51:29.684472 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcbefe8497a0"
I0407 13:51:29.710207 1819972 logs.go:123] Gathering logs for kube-apiserver [78f8992ce8b4] ...
I0407 13:51:29.710235 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78f8992ce8b4"
I0407 13:51:29.783435 1819972 logs.go:123] Gathering logs for kube-proxy [062895b6a45a] ...
I0407 13:51:29.783469 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 062895b6a45a"
I0407 13:51:29.805438 1819972 logs.go:123] Gathering logs for kubernetes-dashboard [c66d59ac00e0] ...
I0407 13:51:29.805467 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66d59ac00e0"
I0407 13:51:29.827642 1819972 logs.go:123] Gathering logs for Docker ...
I0407 13:51:29.827670 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0407 13:51:29.860550 1819972 logs.go:123] Gathering logs for kubelet ...
I0407 13:51:29.860582 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0407 13:51:29.925833 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905405 1477 reflector.go:138] object-"default"/"default-token-n6f2l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-n6f2l" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:29.926096 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905477 1477 reflector.go:138] object-"kube-system"/"kube-proxy-token-lqq9d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-lqq9d" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:29.926308 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905533 1477 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:29.926617 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905580 1477 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cxdxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cxdxc" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:29.926851 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905695 1477 reflector.go:138] object-"kube-system"/"coredns-token-tttv5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tttv5" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:29.927056 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.906374 1477 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:29.933879 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:01 old-k8s-version-169187 kubelet[1477]: E0407 13:46:01.360618 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:29.934551 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:02 old-k8s-version-169187 kubelet[1477]: E0407 13:46:02.348740 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.935072 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:03 old-k8s-version-169187 kubelet[1477]: E0407 13:46:03.359622 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.937497 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:15 old-k8s-version-169187 kubelet[1477]: E0407 13:46:15.664641 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:29.942139 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.047413 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:29.942704 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.686892 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.942904 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:21 old-k8s-version-169187 kubelet[1477]: E0407 13:46:21.704280 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.943269 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:26 old-k8s-version-169187 kubelet[1477]: E0407 13:46:26.633503 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.943921 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:32 old-k8s-version-169187 kubelet[1477]: E0407 13:46:32.852394 1477 pod_workers.go:191] Error syncing pod 799a1ac5-a9e9-4fd4-b152-afc0c2012231 ("storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"
W0407 13:51:29.946305 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:34 old-k8s-version-169187 kubelet[1477]: E0407 13:46:34.079950 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:29.948727 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:40 old-k8s-version-169187 kubelet[1477]: E0407 13:46:40.651036 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:29.948928 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:45 old-k8s-version-169187 kubelet[1477]: E0407 13:46:45.642017 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.949249 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:52 old-k8s-version-169187 kubelet[1477]: E0407 13:46:52.633525 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.951501 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:00 old-k8s-version-169187 kubelet[1477]: E0407 13:47:00.133509 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:29.951688 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:05 old-k8s-version-169187 kubelet[1477]: E0407 13:47:05.658773 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.951887 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:14 old-k8s-version-169187 kubelet[1477]: E0407 13:47:14.633956 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.952074 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:19 old-k8s-version-169187 kubelet[1477]: E0407 13:47:19.638120 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.952273 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:28 old-k8s-version-169187 kubelet[1477]: E0407 13:47:28.644719 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.954365 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:34 old-k8s-version-169187 kubelet[1477]: E0407 13:47:34.650220 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:29.956620 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:43 old-k8s-version-169187 kubelet[1477]: E0407 13:47:43.179657 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:29.956807 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:47 old-k8s-version-169187 kubelet[1477]: E0407 13:47:47.633825 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.957004 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:54 old-k8s-version-169187 kubelet[1477]: E0407 13:47:54.633728 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.957190 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:58 old-k8s-version-169187 kubelet[1477]: E0407 13:47:58.633722 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.957389 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:07 old-k8s-version-169187 kubelet[1477]: E0407 13:48:07.636662 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.957578 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:09 old-k8s-version-169187 kubelet[1477]: E0407 13:48:09.636763 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.957776 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:22 old-k8s-version-169187 kubelet[1477]: E0407 13:48:22.633343 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.957964 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:24 old-k8s-version-169187 kubelet[1477]: E0407 13:48:24.633503 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.958163 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:35 old-k8s-version-169187 kubelet[1477]: E0407 13:48:35.643057 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.958349 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:38 old-k8s-version-169187 kubelet[1477]: E0407 13:48:38.633424 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.958546 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:47 old-k8s-version-169187 kubelet[1477]: E0407 13:48:47.633680 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.958732 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:53 old-k8s-version-169187 kubelet[1477]: E0407 13:48:53.633745 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.958929 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:02 old-k8s-version-169187 kubelet[1477]: E0407 13:49:02.633346 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.961032 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:05 old-k8s-version-169187 kubelet[1477]: E0407 13:49:05.657505 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:29.961220 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:17 old-k8s-version-169187 kubelet[1477]: E0407 13:49:17.634169 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.963463 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:18 old-k8s-version-169187 kubelet[1477]: E0407 13:49:18.080057 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:29.963688 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:29 old-k8s-version-169187 kubelet[1477]: E0407 13:49:29.638278 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.963877 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:30 old-k8s-version-169187 kubelet[1477]: E0407 13:49:30.638576 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.964075 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:41 old-k8s-version-169187 kubelet[1477]: E0407 13:49:41.654838 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.964262 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:43 old-k8s-version-169187 kubelet[1477]: E0407 13:49:43.636263 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.964459 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:56 old-k8s-version-169187 kubelet[1477]: E0407 13:49:56.633358 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.964644 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:57 old-k8s-version-169187 kubelet[1477]: E0407 13:49:57.633386 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.964842 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:07 old-k8s-version-169187 kubelet[1477]: E0407 13:50:07.638473 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.965027 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:12 old-k8s-version-169187 kubelet[1477]: E0407 13:50:12.633385 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.965226 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:22 old-k8s-version-169187 kubelet[1477]: E0407 13:50:22.633440 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.965411 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:25 old-k8s-version-169187 kubelet[1477]: E0407 13:50:25.633558 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.965615 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:37 old-k8s-version-169187 kubelet[1477]: E0407 13:50:37.636391 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.965804 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:39 old-k8s-version-169187 kubelet[1477]: E0407 13:50:39.634229 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.965989 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.633542 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.966187 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.643349 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.966384 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:03 old-k8s-version-169187 kubelet[1477]: E0407 13:51:03.636916 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.966569 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.966766 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:29.966952 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0407 13:51:29.966966 1819972 logs.go:123] Gathering logs for dmesg ...
I0407 13:51:29.966985 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0407 13:51:29.986242 1819972 logs.go:123] Gathering logs for kube-apiserver [82525be035b3] ...
I0407 13:51:29.986276 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82525be035b3"
I0407 13:51:30.086153 1819972 logs.go:123] Gathering logs for etcd [b45737d73f96] ...
I0407 13:51:30.086196 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b45737d73f96"
I0407 13:51:30.124690 1819972 logs.go:123] Gathering logs for etcd [f4fcf1ba0dce] ...
I0407 13:51:30.124744 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4fcf1ba0dce"
I0407 13:51:30.155699 1819972 out.go:358] Setting ErrFile to fd 2...
I0407 13:51:30.155727 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0407 13:51:30.155790 1819972 out.go:270] X Problems detected in kubelet:
W0407 13:51:30.155801 1819972 out.go:270] Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.643349 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:30.155808 1819972 out.go:270] Apr 07 13:51:03 old-k8s-version-169187 kubelet[1477]: E0407 13:51:03.636916 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:30.155886 1819972 out.go:270] Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:30.155894 1819972 out.go:270] Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:30.155899 1819972 out.go:270] Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0407 13:51:30.155904 1819972 out.go:358] Setting ErrFile to fd 2...
I0407 13:51:30.155909 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:51:40.157136 1819972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 13:51:40.169890 1819972 api_server.go:72] duration metric: took 5m53.648300907s to wait for apiserver process to appear ...
I0407 13:51:40.169915 1819972 api_server.go:88] waiting for apiserver healthz status ...
I0407 13:51:40.170006 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0407 13:51:40.192333 1819972 logs.go:282] 2 containers: [82525be035b3 78f8992ce8b4]
I0407 13:51:40.192413 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0407 13:51:40.216209 1819972 logs.go:282] 2 containers: [b45737d73f96 f4fcf1ba0dce]
I0407 13:51:40.216295 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0407 13:51:40.239039 1819972 logs.go:282] 2 containers: [a2086baae207 d92117844997]
I0407 13:51:40.239125 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0407 13:51:40.262354 1819972 logs.go:282] 2 containers: [fce53c7f2eb0 3a9781764312]
I0407 13:51:40.262433 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0407 13:51:40.283816 1819972 logs.go:282] 2 containers: [062895b6a45a 7cb4581969c6]
I0407 13:51:40.283903 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0407 13:51:40.305544 1819972 logs.go:282] 2 containers: [c2da54d5c256 3e48a853c03b]
I0407 13:51:40.305632 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0407 13:51:40.326317 1819972 logs.go:282] 0 containers: []
W0407 13:51:40.326339 1819972 logs.go:284] No container was found matching "kindnet"
I0407 13:51:40.326394 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0407 13:51:40.346269 1819972 logs.go:282] 2 containers: [55bf8eb1ab94 fcbefe8497a0]
I0407 13:51:40.346404 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0407 13:51:40.368221 1819972 logs.go:282] 1 containers: [c66d59ac00e0]
I0407 13:51:40.368255 1819972 logs.go:123] Gathering logs for etcd [b45737d73f96] ...
I0407 13:51:40.368267 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b45737d73f96"
I0407 13:51:40.405468 1819972 logs.go:123] Gathering logs for coredns [a2086baae207] ...
I0407 13:51:40.405507 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2086baae207"
I0407 13:51:40.429912 1819972 logs.go:123] Gathering logs for kube-scheduler [3a9781764312] ...
I0407 13:51:40.429941 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9781764312"
I0407 13:51:40.463734 1819972 logs.go:123] Gathering logs for kube-proxy [7cb4581969c6] ...
I0407 13:51:40.463768 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cb4581969c6"
I0407 13:51:40.490415 1819972 logs.go:123] Gathering logs for kubernetes-dashboard [c66d59ac00e0] ...
I0407 13:51:40.490443 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66d59ac00e0"
I0407 13:51:40.524475 1819972 logs.go:123] Gathering logs for kubelet ...
I0407 13:51:40.524504 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0407 13:51:40.585432 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905405 1477 reflector.go:138] object-"default"/"default-token-n6f2l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-n6f2l" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.585693 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905477 1477 reflector.go:138] object-"kube-system"/"kube-proxy-token-lqq9d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-lqq9d" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.585903 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905533 1477 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.586131 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905580 1477 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cxdxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cxdxc" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.586341 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905695 1477 reflector.go:138] object-"kube-system"/"coredns-token-tttv5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tttv5" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.586541 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.906374 1477 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.593243 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:01 old-k8s-version-169187 kubelet[1477]: E0407 13:46:01.360618 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:40.593907 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:02 old-k8s-version-169187 kubelet[1477]: E0407 13:46:02.348740 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.594420 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:03 old-k8s-version-169187 kubelet[1477]: E0407 13:46:03.359622 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.596854 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:15 old-k8s-version-169187 kubelet[1477]: E0407 13:46:15.664641 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:40.601421 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.047413 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:40.601983 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.686892 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.602183 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:21 old-k8s-version-169187 kubelet[1477]: E0407 13:46:21.704280 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.602545 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:26 old-k8s-version-169187 kubelet[1477]: E0407 13:46:26.633503 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.603191 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:32 old-k8s-version-169187 kubelet[1477]: E0407 13:46:32.852394 1477 pod_workers.go:191] Error syncing pod 799a1ac5-a9e9-4fd4-b152-afc0c2012231 ("storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"
W0407 13:51:40.605554 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:34 old-k8s-version-169187 kubelet[1477]: E0407 13:46:34.079950 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:40.607980 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:40 old-k8s-version-169187 kubelet[1477]: E0407 13:46:40.651036 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:40.608182 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:45 old-k8s-version-169187 kubelet[1477]: E0407 13:46:45.642017 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.608499 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:52 old-k8s-version-169187 kubelet[1477]: E0407 13:46:52.633525 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.610729 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:00 old-k8s-version-169187 kubelet[1477]: E0407 13:47:00.133509 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:40.610915 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:05 old-k8s-version-169187 kubelet[1477]: E0407 13:47:05.658773 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.611111 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:14 old-k8s-version-169187 kubelet[1477]: E0407 13:47:14.633956 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.611295 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:19 old-k8s-version-169187 kubelet[1477]: E0407 13:47:19.638120 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.611490 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:28 old-k8s-version-169187 kubelet[1477]: E0407 13:47:28.644719 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.613571 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:34 old-k8s-version-169187 kubelet[1477]: E0407 13:47:34.650220 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:40.615805 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:43 old-k8s-version-169187 kubelet[1477]: E0407 13:47:43.179657 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:40.615992 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:47 old-k8s-version-169187 kubelet[1477]: E0407 13:47:47.633825 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.616188 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:54 old-k8s-version-169187 kubelet[1477]: E0407 13:47:54.633728 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.616374 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:58 old-k8s-version-169187 kubelet[1477]: E0407 13:47:58.633722 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.616569 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:07 old-k8s-version-169187 kubelet[1477]: E0407 13:48:07.636662 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.616749 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:09 old-k8s-version-169187 kubelet[1477]: E0407 13:48:09.636763 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.616941 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:22 old-k8s-version-169187 kubelet[1477]: E0407 13:48:22.633343 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.617122 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:24 old-k8s-version-169187 kubelet[1477]: E0407 13:48:24.633503 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.617319 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:35 old-k8s-version-169187 kubelet[1477]: E0407 13:48:35.643057 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.617507 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:38 old-k8s-version-169187 kubelet[1477]: E0407 13:48:38.633424 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.617702 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:47 old-k8s-version-169187 kubelet[1477]: E0407 13:48:47.633680 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.617885 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:53 old-k8s-version-169187 kubelet[1477]: E0407 13:48:53.633745 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.618081 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:02 old-k8s-version-169187 kubelet[1477]: E0407 13:49:02.633346 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.620175 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:05 old-k8s-version-169187 kubelet[1477]: E0407 13:49:05.657505 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:40.620361 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:17 old-k8s-version-169187 kubelet[1477]: E0407 13:49:17.634169 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.622592 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:18 old-k8s-version-169187 kubelet[1477]: E0407 13:49:18.080057 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:40.622790 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:29 old-k8s-version-169187 kubelet[1477]: E0407 13:49:29.638278 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.622975 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:30 old-k8s-version-169187 kubelet[1477]: E0407 13:49:30.638576 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.623170 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:41 old-k8s-version-169187 kubelet[1477]: E0407 13:49:41.654838 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.623354 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:43 old-k8s-version-169187 kubelet[1477]: E0407 13:49:43.636263 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.623557 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:56 old-k8s-version-169187 kubelet[1477]: E0407 13:49:56.633358 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.623742 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:57 old-k8s-version-169187 kubelet[1477]: E0407 13:49:57.633386 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.623944 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:07 old-k8s-version-169187 kubelet[1477]: E0407 13:50:07.638473 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.624128 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:12 old-k8s-version-169187 kubelet[1477]: E0407 13:50:12.633385 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.624326 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:22 old-k8s-version-169187 kubelet[1477]: E0407 13:50:22.633440 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.624510 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:25 old-k8s-version-169187 kubelet[1477]: E0407 13:50:25.633558 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.624729 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:37 old-k8s-version-169187 kubelet[1477]: E0407 13:50:37.636391 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.624919 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:39 old-k8s-version-169187 kubelet[1477]: E0407 13:50:39.634229 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.625102 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.633542 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.625298 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.643349 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.625497 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:03 old-k8s-version-169187 kubelet[1477]: E0407 13:51:03.636916 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.625681 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.625877 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.626063 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.626260 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:32 old-k8s-version-169187 kubelet[1477]: E0407 13:51:32.635754 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.626442 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:33 old-k8s-version-169187 kubelet[1477]: E0407 13:51:33.635745 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0407 13:51:40.626453 1819972 logs.go:123] Gathering logs for kube-apiserver [82525be035b3] ...
I0407 13:51:40.626467 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82525be035b3"
I0407 13:51:40.692053 1819972 logs.go:123] Gathering logs for etcd [f4fcf1ba0dce] ...
I0407 13:51:40.692095 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4fcf1ba0dce"
I0407 13:51:40.730914 1819972 logs.go:123] Gathering logs for kube-scheduler [fce53c7f2eb0] ...
I0407 13:51:40.731004 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fce53c7f2eb0"
I0407 13:51:40.756163 1819972 logs.go:123] Gathering logs for kube-proxy [062895b6a45a] ...
I0407 13:51:40.756192 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 062895b6a45a"
I0407 13:51:40.779888 1819972 logs.go:123] Gathering logs for container status ...
I0407 13:51:40.779915 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0407 13:51:40.830573 1819972 logs.go:123] Gathering logs for kube-controller-manager [c2da54d5c256] ...
I0407 13:51:40.830603 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2da54d5c256"
I0407 13:51:40.883151 1819972 logs.go:123] Gathering logs for dmesg ...
I0407 13:51:40.883193 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0407 13:51:40.899720 1819972 logs.go:123] Gathering logs for describe nodes ...
I0407 13:51:40.899749 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0407 13:51:41.056544 1819972 logs.go:123] Gathering logs for coredns [d92117844997] ...
I0407 13:51:41.056575 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d92117844997"
I0407 13:51:41.081183 1819972 logs.go:123] Gathering logs for storage-provisioner [55bf8eb1ab94] ...
I0407 13:51:41.081212 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55bf8eb1ab94"
I0407 13:51:41.104294 1819972 logs.go:123] Gathering logs for storage-provisioner [fcbefe8497a0] ...
I0407 13:51:41.104324 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcbefe8497a0"
I0407 13:51:41.126590 1819972 logs.go:123] Gathering logs for Docker ...
I0407 13:51:41.126619 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0407 13:51:41.152068 1819972 logs.go:123] Gathering logs for kube-controller-manager [3e48a853c03b] ...
I0407 13:51:41.152100 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e48a853c03b"
I0407 13:51:41.190557 1819972 logs.go:123] Gathering logs for kube-apiserver [78f8992ce8b4] ...
I0407 13:51:41.190633 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78f8992ce8b4"
I0407 13:51:41.268535 1819972 out.go:358] Setting ErrFile to fd 2...
I0407 13:51:41.268567 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0407 13:51:41.268629 1819972 out.go:270] X Problems detected in kubelet:
W0407 13:51:41.268642 1819972 out.go:270] Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:41.268652 1819972 out.go:270] Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:41.268659 1819972 out.go:270] Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:41.268664 1819972 out.go:270] Apr 07 13:51:32 old-k8s-version-169187 kubelet[1477]: E0407 13:51:32.635754 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:41.268672 1819972 out.go:270] Apr 07 13:51:33 old-k8s-version-169187 kubelet[1477]: E0407 13:51:33.635745 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0407 13:51:41.268683 1819972 out.go:358] Setting ErrFile to fd 2...
I0407 13:51:41.268688 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:51:51.270356 1819972 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0407 13:51:51.280041 1819972 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0407 13:51:51.283661 1819972 out.go:201]
W0407 13:51:51.286521 1819972 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0407 13:51:51.286680 1819972 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0407 13:51:51.286738 1819972 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0407 13:51:51.286781 1819972 out.go:270] *
W0407 13:51:51.287792 1819972 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0407 13:51:51.291196 1819972 out.go:201]
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-169187 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=docker --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-169187
helpers_test.go:235: (dbg) docker inspect old-k8s-version-169187:
-- stdout --
[
{
"Id": "685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f",
"Created": "2025-04-07T13:43:02.411484895Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1820102,
"ExitCode": 0,
"Error": "",
"StartedAt": "2025-04-07T13:45:37.789338054Z",
"FinishedAt": "2025-04-07T13:45:36.542041628Z"
},
"Image": "sha256:1a97cd9e9bbab266425b883d3ed87fee4969302ed9a49ce4df4bf460f6bbf404",
"ResolvConfPath": "/var/lib/docker/containers/685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f/hostname",
"HostsPath": "/var/lib/docker/containers/685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f/hosts",
"LogPath": "/var/lib/docker/containers/685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f/685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f-json.log",
"Name": "/old-k8s-version-169187",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-169187:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-169187",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"ID": "685e713d0440990ac7eaa4f04d0b10b9ef68187d9df8fa217a8bbe7f1d9b838f",
"LowerDir": "/var/lib/docker/overlay2/7c8ee11aeeb34c5fedcbaebec6ba343fe606838c44b7d26de5867ee0103fb670-init/diff:/var/lib/docker/overlay2/2fffce34c50e77173db4df34163cc0f451b50794e01d4ae821270ba6f3468b6b/diff",
"MergedDir": "/var/lib/docker/overlay2/7c8ee11aeeb34c5fedcbaebec6ba343fe606838c44b7d26de5867ee0103fb670/merged",
"UpperDir": "/var/lib/docker/overlay2/7c8ee11aeeb34c5fedcbaebec6ba343fe606838c44b7d26de5867ee0103fb670/diff",
"WorkDir": "/var/lib/docker/overlay2/7c8ee11aeeb34c5fedcbaebec6ba343fe606838c44b7d26de5867ee0103fb670/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-169187",
"Source": "/var/lib/docker/volumes/old-k8s-version-169187/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-169187",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-169187",
"name.minikube.sigs.k8s.io": "old-k8s-version-169187",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "9892ed8f0235061ccb7c524b9650d9f6612ddc6e9d4b8c5e22a969c98e67de8f",
"SandboxKey": "/var/run/docker/netns/9892ed8f0235",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34611"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34612"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34615"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34613"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "34614"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-169187": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "c6:c6:fc:25:51:94",
"DriverOpts": null,
"GwPriority": 0,
"NetworkID": "760c8c32a067b6850b152d6c6a7ed72d95e95fc7589c598a629781795e2c2278",
"EndpointID": "6d240c4f72f351da7768bf9e2bab94c1998c8f98dbb6bb04bd06a5533056cd17",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-169187",
"685e713d0440"
]
}
}
}
}
]
-- /stdout --
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-169187 -n old-k8s-version-169187
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-169187 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-169187 logs -n 25: (2.023512114s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
| ssh | docker-flags-055908 ssh | docker-flags-055908 | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
| | sudo systemctl show docker | | | | | |
| | --property=Environment | | | | | |
| | --no-pager | | | | | |
| ssh | docker-flags-055908 ssh | docker-flags-055908 | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
| | sudo systemctl show docker | | | | | |
| | --property=ExecStart | | | | | |
| | --no-pager | | | | | |
| delete | -p docker-flags-055908 | docker-flags-055908 | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
| start | -p cert-options-925217 | cert-options-925217 | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| ssh | cert-options-925217 ssh | cert-options-925217 | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-925217 -- sudo | cert-options-925217 | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-925217 | cert-options-925217 | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:42 UTC |
| start | -p old-k8s-version-169187 | old-k8s-version-169187 | jenkins | v1.35.0 | 07 Apr 25 13:42 UTC | 07 Apr 25 13:45 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-687877 | cert-expiration-687877 | jenkins | v1.35.0 | 07 Apr 25 13:44 UTC | 07 Apr 25 13:45 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| delete | -p cert-expiration-687877 | cert-expiration-687877 | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC | 07 Apr 25 13:45 UTC |
| start | -p | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC | 07 Apr 25 13:46 UTC |
| | default-k8s-diff-port-872084 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| addons | enable metrics-server -p old-k8s-version-169187 | old-k8s-version-169187 | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC | 07 Apr 25 13:45 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-169187 | old-k8s-version-169187 | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC | 07 Apr 25 13:45 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-169187 | old-k8s-version-169187 | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC | 07 Apr 25 13:45 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-169187 | old-k8s-version-169187 | jenkins | v1.35.0 | 07 Apr 25 13:45 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p default-k8s-diff-port-872084 | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:46 UTC | 07 Apr 25 13:46 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:46 UTC | 07 Apr 25 13:46 UTC |
| | default-k8s-diff-port-872084 | | | | | |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p default-k8s-diff-port-872084 | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:46 UTC | 07 Apr 25 13:46 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:46 UTC | 07 Apr 25 13:51 UTC |
| | default-k8s-diff-port-872084 | | | | | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --apiserver-port=8444 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
| image | default-k8s-diff-port-872084 | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
| | image list --format=json | | | | | |
| pause | -p | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
| | default-k8s-diff-port-872084 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
| | default-k8s-diff-port-872084 | | | | | |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
| | default-k8s-diff-port-872084 | | | | | |
| delete | -p | default-k8s-diff-port-872084 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | 07 Apr 25 13:51 UTC |
| | default-k8s-diff-port-872084 | | | | | |
| start | -p embed-certs-690840 | embed-certs-690840 | jenkins | v1.35.0 | 07 Apr 25 13:51 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=docker | | | | | |
| | --kubernetes-version=v1.32.2 | | | | | |
|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2025/04/07 13:51:36
Running on machine: ip-172-31-30-239
Binary: Built with gc go1.24.0 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0407 13:51:36.857946 1834508 out.go:345] Setting OutFile to fd 1 ...
I0407 13:51:36.858439 1834508 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:51:36.858477 1834508 out.go:358] Setting ErrFile to fd 2...
I0407 13:51:36.858499 1834508 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:51:36.858795 1834508 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20598-1489638/.minikube/bin
I0407 13:51:36.859255 1834508 out.go:352] Setting JSON to false
I0407 13:51:36.860348 1834508 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":27245,"bootTime":1744006652,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1081-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
I0407 13:51:36.860458 1834508 start.go:139] virtualization:
I0407 13:51:36.864378 1834508 out.go:177] * [embed-certs-690840] minikube v1.35.0 on Ubuntu 20.04 (arm64)
I0407 13:51:36.868591 1834508 out.go:177] - MINIKUBE_LOCATION=20598
I0407 13:51:36.868733 1834508 notify.go:220] Checking for updates...
I0407 13:51:36.872805 1834508 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0407 13:51:36.875866 1834508 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/20598-1489638/kubeconfig
I0407 13:51:36.878980 1834508 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/20598-1489638/.minikube
I0407 13:51:36.881850 1834508 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0407 13:51:36.884793 1834508 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0407 13:51:36.888375 1834508 config.go:182] Loaded profile config "old-k8s-version-169187": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.20.0
I0407 13:51:36.888489 1834508 driver.go:394] Setting default libvirt URI to qemu:///system
I0407 13:51:36.913091 1834508 docker.go:123] docker version: linux-28.0.4:Docker Engine - Community
I0407 13:51:36.913209 1834508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0407 13:51:36.971613 1834508 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 13:51:36.961703143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0407 13:51:36.971725 1834508 docker.go:318] overlay module found
I0407 13:51:36.974953 1834508 out.go:177] * Using the docker driver based on user configuration
I0407 13:51:36.977895 1834508 start.go:297] selected driver: docker
I0407 13:51:36.977914 1834508 start.go:901] validating driver "docker" against <nil>
I0407 13:51:36.977929 1834508 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0407 13:51:36.978654 1834508 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0407 13:51:37.044856 1834508 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-04-07 13:51:37.030742658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1081-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.0.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:05044ec0a9a75232cad458027ca83437aae3f4da} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:v1.2.5-0-g59923ef} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.22.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.34.0]] Warnings:<nil>}}
I0407 13:51:37.045006 1834508 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0407 13:51:37.045245 1834508 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0407 13:51:37.048977 1834508 out.go:177] * Using Docker driver with root privileges
I0407 13:51:37.051875 1834508 cni.go:84] Creating CNI manager for ""
I0407 13:51:37.051959 1834508 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0407 13:51:37.051973 1834508 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0407 13:51:37.052056 1834508 start.go:340] cluster config:
{Name:embed-certs-690840 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-690840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0407 13:51:37.055154 1834508 out.go:177] * Starting "embed-certs-690840" primary control-plane node in "embed-certs-690840" cluster
I0407 13:51:37.057976 1834508 cache.go:121] Beginning downloading kic base image for docker with docker
I0407 13:51:37.060926 1834508 out.go:177] * Pulling base image v0.0.46-1743675393-20591 ...
I0407 13:51:37.063825 1834508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0407 13:51:37.063886 1834508 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4
I0407 13:51:37.063899 1834508 cache.go:56] Caching tarball of preloaded images
I0407 13:51:37.063924 1834508 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon
I0407 13:51:37.064014 1834508 preload.go:172] Found /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0407 13:51:37.064025 1834508 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
I0407 13:51:37.064146 1834508 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840/config.json ...
I0407 13:51:37.064177 1834508 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840/config.json: {Name:mkf128e7c0f140aadfda249a3ce6b29741225e11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:51:37.083303 1834508 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 in local docker daemon, skipping pull
I0407 13:51:37.083326 1834508 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 exists in daemon, skipping load
I0407 13:51:37.083344 1834508 cache.go:230] Successfully downloaded all kic artifacts
I0407 13:51:37.083373 1834508 start.go:360] acquireMachinesLock for embed-certs-690840: {Name:mk78a25da2d634e43a1d98409ffb7d56e161fa1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0407 13:51:37.084125 1834508 start.go:364] duration metric: took 730.325µs to acquireMachinesLock for "embed-certs-690840"
I0407 13:51:37.084163 1834508 start.go:93] Provisioning new machine with config: &{Name:embed-certs-690840 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-690840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
I0407 13:51:37.084238 1834508 start.go:125] createHost starting for "" (driver="docker")
I0407 13:51:37.087569 1834508 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0407 13:51:37.087833 1834508 start.go:159] libmachine.API.Create for "embed-certs-690840" (driver="docker")
I0407 13:51:37.087871 1834508 client.go:168] LocalClient.Create starting
I0407 13:51:37.087935 1834508 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem
I0407 13:51:37.087978 1834508 main.go:141] libmachine: Decoding PEM data...
I0407 13:51:37.087995 1834508 main.go:141] libmachine: Parsing certificate...
I0407 13:51:37.088054 1834508 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem
I0407 13:51:37.088077 1834508 main.go:141] libmachine: Decoding PEM data...
I0407 13:51:37.088090 1834508 main.go:141] libmachine: Parsing certificate...
I0407 13:51:37.088444 1834508 cli_runner.go:164] Run: docker network inspect embed-certs-690840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0407 13:51:37.104834 1834508 cli_runner.go:211] docker network inspect embed-certs-690840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0407 13:51:37.104923 1834508 network_create.go:284] running [docker network inspect embed-certs-690840] to gather additional debugging logs...
I0407 13:51:37.104944 1834508 cli_runner.go:164] Run: docker network inspect embed-certs-690840
W0407 13:51:37.120532 1834508 cli_runner.go:211] docker network inspect embed-certs-690840 returned with exit code 1
I0407 13:51:37.120579 1834508 network_create.go:287] error running [docker network inspect embed-certs-690840]: docker network inspect embed-certs-690840: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-690840 not found
I0407 13:51:37.120593 1834508 network_create.go:289] output of [docker network inspect embed-certs-690840]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-690840 not found
** /stderr **
I0407 13:51:37.120810 1834508 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 13:51:37.138571 1834508 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cb68a24093bb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:32:02:6c:69:0b:7a} reservation:<nil>}
I0407 13:51:37.138981 1834508 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-0e1fc9d3957e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:9f:61:c6:81:75} reservation:<nil>}
I0407 13:51:37.139310 1834508 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-9e29b45f042f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:1e:9c:a9:01:d7} reservation:<nil>}
I0407 13:51:37.139633 1834508 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-760c8c32a067 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:67:ff:89:49:26} reservation:<nil>}
I0407 13:51:37.140057 1834508 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019de200}
I0407 13:51:37.140082 1834508 network_create.go:124] attempt to create docker network embed-certs-690840 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0407 13:51:37.140142 1834508 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-690840 embed-certs-690840
I0407 13:51:37.202907 1834508 network_create.go:108] docker network embed-certs-690840 192.168.85.0/24 created
I0407 13:51:37.202941 1834508 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-690840" container
I0407 13:51:37.203016 1834508 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0407 13:51:37.226184 1834508 cli_runner.go:164] Run: docker volume create embed-certs-690840 --label name.minikube.sigs.k8s.io=embed-certs-690840 --label created_by.minikube.sigs.k8s.io=true
I0407 13:51:37.245634 1834508 oci.go:103] Successfully created a docker volume embed-certs-690840
I0407 13:51:37.245719 1834508 cli_runner.go:164] Run: docker run --rm --name embed-certs-690840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-690840 --entrypoint /usr/bin/test -v embed-certs-690840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -d /var/lib
I0407 13:51:37.783807 1834508 oci.go:107] Successfully prepared a docker volume embed-certs-690840
I0407 13:51:37.783857 1834508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0407 13:51:37.783878 1834508 kic.go:194] Starting extracting preloaded images to volume ...
I0407 13:51:37.783942 1834508 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-690840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir
I0407 13:51:41.596764 1834508 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20598-1489638/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-690840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 -I lz4 -xf /preloaded.tar -C /extractDir: (3.812778481s)
I0407 13:51:41.596807 1834508 kic.go:203] duration metric: took 3.812925148s to extract preloaded images to volume ...
W0407 13:51:41.596949 1834508 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0407 13:51:41.597069 1834508 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0407 13:51:41.659736 1834508 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-690840 --name embed-certs-690840 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-690840 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-690840 --network embed-certs-690840 --ip 192.168.85.2 --volume embed-certs-690840:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727
I0407 13:51:40.157136 1819972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0407 13:51:40.169890 1819972 api_server.go:72] duration metric: took 5m53.648300907s to wait for apiserver process to appear ...
I0407 13:51:40.169915 1819972 api_server.go:88] waiting for apiserver healthz status ...
I0407 13:51:40.170006 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0407 13:51:40.192333 1819972 logs.go:282] 2 containers: [82525be035b3 78f8992ce8b4]
I0407 13:51:40.192413 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0407 13:51:40.216209 1819972 logs.go:282] 2 containers: [b45737d73f96 f4fcf1ba0dce]
I0407 13:51:40.216295 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0407 13:51:40.239039 1819972 logs.go:282] 2 containers: [a2086baae207 d92117844997]
I0407 13:51:40.239125 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0407 13:51:40.262354 1819972 logs.go:282] 2 containers: [fce53c7f2eb0 3a9781764312]
I0407 13:51:40.262433 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0407 13:51:40.283816 1819972 logs.go:282] 2 containers: [062895b6a45a 7cb4581969c6]
I0407 13:51:40.283903 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0407 13:51:40.305544 1819972 logs.go:282] 2 containers: [c2da54d5c256 3e48a853c03b]
I0407 13:51:40.305632 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
I0407 13:51:40.326317 1819972 logs.go:282] 0 containers: []
W0407 13:51:40.326339 1819972 logs.go:284] No container was found matching "kindnet"
I0407 13:51:40.326394 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0407 13:51:40.346269 1819972 logs.go:282] 2 containers: [55bf8eb1ab94 fcbefe8497a0]
I0407 13:51:40.346404 1819972 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0407 13:51:40.368221 1819972 logs.go:282] 1 containers: [c66d59ac00e0]
I0407 13:51:40.368255 1819972 logs.go:123] Gathering logs for etcd [b45737d73f96] ...
I0407 13:51:40.368267 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b45737d73f96"
I0407 13:51:40.405468 1819972 logs.go:123] Gathering logs for coredns [a2086baae207] ...
I0407 13:51:40.405507 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2086baae207"
I0407 13:51:40.429912 1819972 logs.go:123] Gathering logs for kube-scheduler [3a9781764312] ...
I0407 13:51:40.429941 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a9781764312"
I0407 13:51:40.463734 1819972 logs.go:123] Gathering logs for kube-proxy [7cb4581969c6] ...
I0407 13:51:40.463768 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cb4581969c6"
I0407 13:51:40.490415 1819972 logs.go:123] Gathering logs for kubernetes-dashboard [c66d59ac00e0] ...
I0407 13:51:40.490443 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c66d59ac00e0"
I0407 13:51:40.524475 1819972 logs.go:123] Gathering logs for kubelet ...
I0407 13:51:40.524504 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0407 13:51:40.585432 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905405 1477 reflector.go:138] object-"default"/"default-token-n6f2l": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-n6f2l" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.585693 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905477 1477 reflector.go:138] object-"kube-system"/"kube-proxy-token-lqq9d": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-lqq9d" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.585903 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905533 1477 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.586131 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905580 1477 reflector.go:138] object-"kube-system"/"storage-provisioner-token-cxdxc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-cxdxc" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.586341 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.905695 1477 reflector.go:138] object-"kube-system"/"coredns-token-tttv5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tttv5" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.586541 1819972 logs.go:138] Found kubelet problem: Apr 07 13:45:59 old-k8s-version-169187 kubelet[1477]: E0407 13:45:59.906374 1477 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-169187" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-169187' and this object
W0407 13:51:40.593243 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:01 old-k8s-version-169187 kubelet[1477]: E0407 13:46:01.360618 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:40.593907 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:02 old-k8s-version-169187 kubelet[1477]: E0407 13:46:02.348740 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.594420 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:03 old-k8s-version-169187 kubelet[1477]: E0407 13:46:03.359622 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.596854 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:15 old-k8s-version-169187 kubelet[1477]: E0407 13:46:15.664641 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:40.601421 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.047413 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:40.601983 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:20 old-k8s-version-169187 kubelet[1477]: E0407 13:46:20.686892 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.602183 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:21 old-k8s-version-169187 kubelet[1477]: E0407 13:46:21.704280 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.602545 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:26 old-k8s-version-169187 kubelet[1477]: E0407 13:46:26.633503 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.603191 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:32 old-k8s-version-169187 kubelet[1477]: E0407 13:46:32.852394 1477 pod_workers.go:191] Error syncing pod 799a1ac5-a9e9-4fd4-b152-afc0c2012231 ("storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(799a1ac5-a9e9-4fd4-b152-afc0c2012231)"
W0407 13:51:40.605554 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:34 old-k8s-version-169187 kubelet[1477]: E0407 13:46:34.079950 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:40.607980 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:40 old-k8s-version-169187 kubelet[1477]: E0407 13:46:40.651036 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:40.608182 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:45 old-k8s-version-169187 kubelet[1477]: E0407 13:46:45.642017 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.608499 1819972 logs.go:138] Found kubelet problem: Apr 07 13:46:52 old-k8s-version-169187 kubelet[1477]: E0407 13:46:52.633525 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.610729 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:00 old-k8s-version-169187 kubelet[1477]: E0407 13:47:00.133509 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:40.610915 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:05 old-k8s-version-169187 kubelet[1477]: E0407 13:47:05.658773 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.611111 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:14 old-k8s-version-169187 kubelet[1477]: E0407 13:47:14.633956 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.611295 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:19 old-k8s-version-169187 kubelet[1477]: E0407 13:47:19.638120 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.611490 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:28 old-k8s-version-169187 kubelet[1477]: E0407 13:47:28.644719 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.613571 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:34 old-k8s-version-169187 kubelet[1477]: E0407 13:47:34.650220 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:40.615805 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:43 old-k8s-version-169187 kubelet[1477]: E0407 13:47:43.179657 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:40.615992 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:47 old-k8s-version-169187 kubelet[1477]: E0407 13:47:47.633825 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.616188 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:54 old-k8s-version-169187 kubelet[1477]: E0407 13:47:54.633728 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.616374 1819972 logs.go:138] Found kubelet problem: Apr 07 13:47:58 old-k8s-version-169187 kubelet[1477]: E0407 13:47:58.633722 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.616569 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:07 old-k8s-version-169187 kubelet[1477]: E0407 13:48:07.636662 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.616749 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:09 old-k8s-version-169187 kubelet[1477]: E0407 13:48:09.636763 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.616941 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:22 old-k8s-version-169187 kubelet[1477]: E0407 13:48:22.633343 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.617122 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:24 old-k8s-version-169187 kubelet[1477]: E0407 13:48:24.633503 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.617319 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:35 old-k8s-version-169187 kubelet[1477]: E0407 13:48:35.643057 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.617507 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:38 old-k8s-version-169187 kubelet[1477]: E0407 13:48:38.633424 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.617702 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:47 old-k8s-version-169187 kubelet[1477]: E0407 13:48:47.633680 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.617885 1819972 logs.go:138] Found kubelet problem: Apr 07 13:48:53 old-k8s-version-169187 kubelet[1477]: E0407 13:48:53.633745 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.618081 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:02 old-k8s-version-169187 kubelet[1477]: E0407 13:49:02.633346 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.620175 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:05 old-k8s-version-169187 kubelet[1477]: E0407 13:49:05.657505 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0407 13:51:40.620361 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:17 old-k8s-version-169187 kubelet[1477]: E0407 13:49:17.634169 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.622592 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:18 old-k8s-version-169187 kubelet[1477]: E0407 13:49:18.080057 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ErrImagePull: "rpc error: code = Unknown desc = [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
W0407 13:51:40.622790 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:29 old-k8s-version-169187 kubelet[1477]: E0407 13:49:29.638278 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.622975 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:30 old-k8s-version-169187 kubelet[1477]: E0407 13:49:30.638576 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.623170 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:41 old-k8s-version-169187 kubelet[1477]: E0407 13:49:41.654838 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.623354 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:43 old-k8s-version-169187 kubelet[1477]: E0407 13:49:43.636263 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.623557 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:56 old-k8s-version-169187 kubelet[1477]: E0407 13:49:56.633358 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.623742 1819972 logs.go:138] Found kubelet problem: Apr 07 13:49:57 old-k8s-version-169187 kubelet[1477]: E0407 13:49:57.633386 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.623944 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:07 old-k8s-version-169187 kubelet[1477]: E0407 13:50:07.638473 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.624128 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:12 old-k8s-version-169187 kubelet[1477]: E0407 13:50:12.633385 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.624326 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:22 old-k8s-version-169187 kubelet[1477]: E0407 13:50:22.633440 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.624510 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:25 old-k8s-version-169187 kubelet[1477]: E0407 13:50:25.633558 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.624729 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:37 old-k8s-version-169187 kubelet[1477]: E0407 13:50:37.636391 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.624919 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:39 old-k8s-version-169187 kubelet[1477]: E0407 13:50:39.634229 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.625102 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.633542 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.625298 1819972 logs.go:138] Found kubelet problem: Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.643349 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.625497 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:03 old-k8s-version-169187 kubelet[1477]: E0407 13:51:03.636916 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.625681 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.625877 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.626063 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.626260 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:32 old-k8s-version-169187 kubelet[1477]: E0407 13:51:32.635754 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:40.626442 1819972 logs.go:138] Found kubelet problem: Apr 07 13:51:33 old-k8s-version-169187 kubelet[1477]: E0407 13:51:33.635745 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0407 13:51:40.626453 1819972 logs.go:123] Gathering logs for kube-apiserver [82525be035b3] ...
I0407 13:51:40.626467 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82525be035b3"
I0407 13:51:40.692053 1819972 logs.go:123] Gathering logs for etcd [f4fcf1ba0dce] ...
I0407 13:51:40.692095 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4fcf1ba0dce"
I0407 13:51:40.730914 1819972 logs.go:123] Gathering logs for kube-scheduler [fce53c7f2eb0] ...
I0407 13:51:40.731004 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fce53c7f2eb0"
I0407 13:51:40.756163 1819972 logs.go:123] Gathering logs for kube-proxy [062895b6a45a] ...
I0407 13:51:40.756192 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 062895b6a45a"
I0407 13:51:40.779888 1819972 logs.go:123] Gathering logs for container status ...
I0407 13:51:40.779915 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0407 13:51:40.830573 1819972 logs.go:123] Gathering logs for kube-controller-manager [c2da54d5c256] ...
I0407 13:51:40.830603 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c2da54d5c256"
I0407 13:51:40.883151 1819972 logs.go:123] Gathering logs for dmesg ...
I0407 13:51:40.883193 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0407 13:51:40.899720 1819972 logs.go:123] Gathering logs for describe nodes ...
I0407 13:51:40.899749 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0407 13:51:41.056544 1819972 logs.go:123] Gathering logs for coredns [d92117844997] ...
I0407 13:51:41.056575 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d92117844997"
I0407 13:51:41.081183 1819972 logs.go:123] Gathering logs for storage-provisioner [55bf8eb1ab94] ...
I0407 13:51:41.081212 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 55bf8eb1ab94"
I0407 13:51:41.104294 1819972 logs.go:123] Gathering logs for storage-provisioner [fcbefe8497a0] ...
I0407 13:51:41.104324 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fcbefe8497a0"
I0407 13:51:41.126590 1819972 logs.go:123] Gathering logs for Docker ...
I0407 13:51:41.126619 1819972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
I0407 13:51:41.152068 1819972 logs.go:123] Gathering logs for kube-controller-manager [3e48a853c03b] ...
I0407 13:51:41.152100 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3e48a853c03b"
I0407 13:51:41.190557 1819972 logs.go:123] Gathering logs for kube-apiserver [78f8992ce8b4] ...
I0407 13:51:41.190633 1819972 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 78f8992ce8b4"
I0407 13:51:41.268535 1819972 out.go:358] Setting ErrFile to fd 2...
I0407 13:51:41.268567 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
W0407 13:51:41.268629 1819972 out.go:270] X Problems detected in kubelet:
W0407 13:51:41.268642 1819972 out.go:270] Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:41.268652 1819972 out.go:270] Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:41.268659 1819972 out.go:270] Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0407 13:51:41.268664 1819972 out.go:270] Apr 07 13:51:32 old-k8s-version-169187 kubelet[1477]: E0407 13:51:32.635754 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
W0407 13:51:41.268672 1819972 out.go:270] Apr 07 13:51:33 old-k8s-version-169187 kubelet[1477]: E0407 13:51:33.635745 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0407 13:51:41.268683 1819972 out.go:358] Setting ErrFile to fd 2...
I0407 13:51:41.268688 1819972 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 13:51:41.988687 1834508 cli_runner.go:164] Run: docker container inspect embed-certs-690840 --format={{.State.Running}}
I0407 13:51:42.015807 1834508 cli_runner.go:164] Run: docker container inspect embed-certs-690840 --format={{.State.Status}}
I0407 13:51:42.044166 1834508 cli_runner.go:164] Run: docker exec embed-certs-690840 stat /var/lib/dpkg/alternatives/iptables
I0407 13:51:42.113468 1834508 oci.go:144] the created container "embed-certs-690840" has a running status.
I0407 13:51:42.113524 1834508 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa...
I0407 13:51:42.689651 1834508 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0407 13:51:42.726679 1834508 cli_runner.go:164] Run: docker container inspect embed-certs-690840 --format={{.State.Status}}
I0407 13:51:42.753377 1834508 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0407 13:51:42.753400 1834508 kic_runner.go:114] Args: [docker exec --privileged embed-certs-690840 chown docker:docker /home/docker/.ssh/authorized_keys]
I0407 13:51:42.836637 1834508 cli_runner.go:164] Run: docker container inspect embed-certs-690840 --format={{.State.Status}}
I0407 13:51:42.860749 1834508 machine.go:93] provisionDockerMachine start ...
I0407 13:51:42.860864 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
I0407 13:51:42.893205 1834508 main.go:141] libmachine: Using SSH client type: native
I0407 13:51:42.893552 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34621 <nil> <nil>}
I0407 13:51:42.893578 1834508 main.go:141] libmachine: About to run SSH command:
hostname
I0407 13:51:42.894239 1834508 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51458->127.0.0.1:34621: read: connection reset by peer
I0407 13:51:46.023245 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-690840
I0407 13:51:46.023277 1834508 ubuntu.go:169] provisioning hostname "embed-certs-690840"
I0407 13:51:46.023362 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
I0407 13:51:46.041782 1834508 main.go:141] libmachine: Using SSH client type: native
I0407 13:51:46.042108 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34621 <nil> <nil>}
I0407 13:51:46.042125 1834508 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-690840 && echo "embed-certs-690840" | sudo tee /etc/hostname
I0407 13:51:46.177786 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-690840
I0407 13:51:46.177901 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
I0407 13:51:46.196245 1834508 main.go:141] libmachine: Using SSH client type: native
I0407 13:51:46.196563 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34621 <nil> <nil>}
I0407 13:51:46.196585 1834508 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-690840' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-690840/g' /etc/hosts;
else
echo '127.0.1.1 embed-certs-690840' | sudo tee -a /etc/hosts;
fi
fi
I0407 13:51:46.319428 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0407 13:51:46.319527 1834508 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20598-1489638/.minikube CaCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20598-1489638/.minikube}
I0407 13:51:46.319558 1834508 ubuntu.go:177] setting up certificates
I0407 13:51:46.319567 1834508 provision.go:84] configureAuth start
I0407 13:51:46.319634 1834508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-690840
I0407 13:51:46.336598 1834508 provision.go:143] copyHostCerts
I0407 13:51:46.336668 1834508 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem, removing ...
I0407 13:51:46.336680 1834508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem
I0407 13:51:46.336757 1834508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.pem (1082 bytes)
I0407 13:51:46.336852 1834508 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem, removing ...
I0407 13:51:46.336862 1834508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem
I0407 13:51:46.336888 1834508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/cert.pem (1123 bytes)
I0407 13:51:46.336956 1834508 exec_runner.go:144] found /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem, removing ...
I0407 13:51:46.336966 1834508 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem
I0407 13:51:46.336990 1834508 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20598-1489638/.minikube/key.pem (1675 bytes)
I0407 13:51:46.337044 1834508 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca-key.pem org=jenkins.embed-certs-690840 san=[127.0.0.1 192.168.85.2 embed-certs-690840 localhost minikube]
I0407 13:51:46.744624 1834508 provision.go:177] copyRemoteCerts
I0407 13:51:46.744705 1834508 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0407 13:51:46.744749 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
I0407 13:51:46.773496 1834508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa Username:docker}
I0407 13:51:46.865119 1834508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0407 13:51:46.889757 1834508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I0407 13:51:46.914203 1834508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0407 13:51:46.938174 1834508 provision.go:87] duration metric: took 618.592761ms to configureAuth
I0407 13:51:46.938199 1834508 ubuntu.go:193] setting minikube options for container-runtime
I0407 13:51:46.938379 1834508 config.go:182] Loaded profile config "embed-certs-690840": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 13:51:46.938438 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
I0407 13:51:46.955841 1834508 main.go:141] libmachine: Using SSH client type: native
I0407 13:51:46.956220 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34621 <nil> <nil>}
I0407 13:51:46.956237 1834508 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0407 13:51:47.080060 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0407 13:51:47.080083 1834508 ubuntu.go:71] root file system type: overlay
I0407 13:51:47.080187 1834508 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0407 13:51:47.080255 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
I0407 13:51:47.097482 1834508 main.go:141] libmachine: Using SSH client type: native
I0407 13:51:47.097789 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34621 <nil> <nil>}
I0407 13:51:47.097879 1834508 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0407 13:51:47.237844 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60
[Service]
Type=notify
Restart=on-failure
# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I0407 13:51:47.237935 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
I0407 13:51:47.256513 1834508 main.go:141] libmachine: Using SSH client type: native
I0407 13:51:47.256826 1834508 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e66c0] 0x3e8e80 <nil> [] 0s} 127.0.0.1 34621 <nil> <nil>}
I0407 13:51:47.256849 1834508 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0407 13:51:48.123126 1834508 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service 2025-03-25 15:05:41.000000000 +0000
+++ /lib/systemd/system/docker.service.new 2025-04-07 13:51:47.233409961 +0000
@@ -1,46 +1,49 @@
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
-After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
[Service]
Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0407 13:51:48.123158 1834508 machine.go:96] duration metric: took 5.262387408s to provisionDockerMachine
I0407 13:51:48.123171 1834508 client.go:171] duration metric: took 11.03528956s to LocalClient.Create
I0407 13:51:48.123185 1834508 start.go:167] duration metric: took 11.035352576s to libmachine.API.Create "embed-certs-690840"
I0407 13:51:48.123192 1834508 start.go:293] postStartSetup for "embed-certs-690840" (driver="docker")
I0407 13:51:48.123203 1834508 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0407 13:51:48.123268 1834508 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0407 13:51:48.123313 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
I0407 13:51:48.140892 1834508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa Username:docker}
I0407 13:51:48.232828 1834508 ssh_runner.go:195] Run: cat /etc/os-release
I0407 13:51:48.236214 1834508 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0407 13:51:48.236247 1834508 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0407 13:51:48.236258 1834508 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0407 13:51:48.236266 1834508 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I0407 13:51:48.236276 1834508 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1489638/.minikube/addons for local assets ...
I0407 13:51:48.236331 1834508 filesync.go:126] Scanning /home/jenkins/minikube-integration/20598-1489638/.minikube/files for local assets ...
I0407 13:51:48.236431 1834508 filesync.go:149] local asset: /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem -> 14950262.pem in /etc/ssl/certs
I0407 13:51:48.236533 1834508 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0407 13:51:48.245781 1834508 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20598-1489638/.minikube/files/etc/ssl/certs/14950262.pem --> /etc/ssl/certs/14950262.pem (1708 bytes)
I0407 13:51:48.277455 1834508 start.go:296] duration metric: took 154.248187ms for postStartSetup
I0407 13:51:48.277832 1834508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-690840
I0407 13:51:48.299718 1834508 profile.go:143] Saving config to /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840/config.json ...
I0407 13:51:48.299995 1834508 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0407 13:51:48.300052 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
I0407 13:51:48.316255 1834508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa Username:docker}
I0407 13:51:48.400391 1834508 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0407 13:51:48.404656 1834508 start.go:128] duration metric: took 11.320403511s to createHost
I0407 13:51:48.404676 1834508 start.go:83] releasing machines lock for "embed-certs-690840", held for 11.320533834s
I0407 13:51:48.404752 1834508 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-690840
I0407 13:51:48.421749 1834508 ssh_runner.go:195] Run: cat /version.json
I0407 13:51:48.421797 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
I0407 13:51:48.422088 1834508 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0407 13:51:48.422135 1834508 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-690840
I0407 13:51:48.441200 1834508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa Username:docker}
I0407 13:51:48.451604 1834508 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34621 SSHKeyPath:/home/jenkins/minikube-integration/20598-1489638/.minikube/machines/embed-certs-690840/id_rsa Username:docker}
I0407 13:51:48.531051 1834508 ssh_runner.go:195] Run: systemctl --version
I0407 13:51:48.672155 1834508 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0407 13:51:48.676900 1834508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0407 13:51:48.703443 1834508 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0407 13:51:48.703597 1834508 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0407 13:51:48.735991 1834508 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0407 13:51:48.736028 1834508 start.go:495] detecting cgroup driver to use...
I0407 13:51:48.736077 1834508 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0407 13:51:48.736192 1834508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0407 13:51:48.752678 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0407 13:51:48.762269 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0407 13:51:48.772369 1834508 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0407 13:51:48.772439 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0407 13:51:48.788169 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 13:51:48.800333 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0407 13:51:48.810648 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0407 13:51:48.821935 1834508 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0407 13:51:48.831226 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0407 13:51:48.841722 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0407 13:51:48.852827 1834508 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0407 13:51:48.862735 1834508 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0407 13:51:48.872235 1834508 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0407 13:51:48.881115 1834508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:51:48.963257 1834508 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0407 13:51:49.059790 1834508 start.go:495] detecting cgroup driver to use...
I0407 13:51:49.059890 1834508 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0407 13:51:49.059974 1834508 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0407 13:51:49.075333 1834508 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0407 13:51:49.075454 1834508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0407 13:51:49.090407 1834508 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0407 13:51:49.109626 1834508 ssh_runner.go:195] Run: which cri-dockerd
I0407 13:51:49.114302 1834508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0407 13:51:49.124543 1834508 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0407 13:51:49.152866 1834508 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0407 13:51:49.271842 1834508 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0407 13:51:49.380445 1834508 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0407 13:51:49.380625 1834508 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0407 13:51:49.405213 1834508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:51:49.528544 1834508 ssh_runner.go:195] Run: sudo systemctl restart docker
I0407 13:51:49.904057 1834508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0407 13:51:49.915785 1834508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0407 13:51:49.928116 1834508 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0407 13:51:50.018287 1834508 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0407 13:51:50.107109 1834508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:51:50.202681 1834508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0407 13:51:50.222968 1834508 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0407 13:51:50.236078 1834508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:51:50.330088 1834508 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0407 13:51:50.408194 1834508 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0407 13:51:50.408281 1834508 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0407 13:51:50.413339 1834508 start.go:563] Will wait 60s for crictl version
I0407 13:51:50.413400 1834508 ssh_runner.go:195] Run: which crictl
I0407 13:51:50.417311 1834508 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0407 13:51:50.461468 1834508 start.go:579] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 28.0.4
RuntimeApiVersion: v1
I0407 13:51:50.461554 1834508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0407 13:51:50.490147 1834508 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0407 13:51:51.270356 1819972 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0407 13:51:51.280041 1819972 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0407 13:51:51.283661 1819972 out.go:201]
W0407 13:51:51.286521 1819972 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0407 13:51:51.286680 1819972 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0407 13:51:51.286738 1819972 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0407 13:51:51.286781 1819972 out.go:270] *
W0407 13:51:51.287792 1819972 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0407 13:51:51.291196 1819972 out.go:201]
I0407 13:51:50.521765 1834508 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 28.0.4 ...
I0407 13:51:50.521892 1834508 cli_runner.go:164] Run: docker network inspect embed-certs-690840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0407 13:51:50.538490 1834508 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0407 13:51:50.542589 1834508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0407 13:51:50.554334 1834508 kubeadm.go:883] updating cluster {Name:embed-certs-690840 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:embed-certs-690840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0407 13:51:50.554453 1834508 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0407 13:51:50.554512 1834508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0407 13:51:50.575863 1834508 docker.go:689] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0407 13:51:50.575887 1834508 docker.go:619] Images already preloaded, skipping extraction
I0407 13:51:50.575950 1834508 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0407 13:51:50.602011 1834508 docker.go:689] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/coredns/coredns:v1.11.3
registry.k8s.io/pause:3.10
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0407 13:51:50.602038 1834508 cache_images.go:84] Images are preloaded, skipping loading
I0407 13:51:50.602047 1834508 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.32.2 docker true true} ...
I0407 13:51:50.602174 1834508 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-690840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.32.2 ClusterName:embed-certs-690840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0407 13:51:50.602250 1834508 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0407 13:51:50.675069 1834508 cni.go:84] Creating CNI manager for ""
I0407 13:51:50.675097 1834508 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0407 13:51:50.675110 1834508 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0407 13:51:50.675130 1834508 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-690840 NodeName:embed-certs-690840 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0407 13:51:50.675272 1834508 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///var/run/cri-dockerd.sock
name: "embed-certs-690840"
kubeletExtraArgs:
- name: "node-ip"
value: "192.168.85.2"
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
- name: "enable-admission-plugins"
value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
- name: "allocate-node-cidrs"
value: "true"
- name: "leader-elect"
value: "false"
scheduler:
extraArgs:
- name: "leader-elect"
value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
- name: "proxy-refresh-interval"
value: "70000"
kubernetesVersion: v1.32.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0407 13:51:50.675344 1834508 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
I0407 13:51:50.685885 1834508 binaries.go:44] Found k8s binaries, skipping transfer
I0407 13:51:50.685991 1834508 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0407 13:51:50.694683 1834508 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
I0407 13:51:50.718310 1834508 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0407 13:51:50.742122 1834508 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
I0407 13:51:50.761102 1834508 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0407 13:51:50.764634 1834508 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0407 13:51:50.775383 1834508 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0407 13:51:50.878393 1834508 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0407 13:51:50.894081 1834508 certs.go:68] Setting up /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840 for IP: 192.168.85.2
I0407 13:51:50.894192 1834508 certs.go:194] generating shared ca certs ...
I0407 13:51:50.894225 1834508 certs.go:226] acquiring lock for ca certs: {Name:mk03ca927c02de3344f72431a7d9f1cc9d84ee54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0407 13:51:50.894467 1834508 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/ca.key
I0407 13:51:50.894540 1834508 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20598-1489638/.minikube/proxy-client-ca.key
I0407 13:51:50.894563 1834508 certs.go:256] generating profile certs ...
I0407 13:51:50.894641 1834508 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840/client.key
I0407 13:51:50.894684 1834508 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20598-1489638/.minikube/profiles/embed-certs-690840/client.crt with IP's: []
==> Docker <==
Apr 07 13:46:34 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:46:34.076242216Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Apr 07 13:46:40 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:46:40.646328871Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:46:40 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:46:40.646379193Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:46:40 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:46:40.649982240Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:46:59 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:46:59.910801577Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Apr 07 13:47:00 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:00.123348215Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Apr 07 13:47:00 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:00.123812512Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Apr 07 13:47:00 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:00.123979807Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Apr 07 13:47:34 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:34.645147638Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:47:34 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:34.645212795Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:47:34 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:34.648327922Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:47:42 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:42.870300316Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Apr 07 13:47:43 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:43.175025072Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Apr 07 13:47:43 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:43.175257704Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Apr 07 13:47:43 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:47:43.175570157Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Apr 07 13:49:05 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:05.653125401Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:49:05 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:05.653568340Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:49:05 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:05.656297317Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:49:17 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:17.878074972Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Apr 07 13:49:18 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:18.076831734Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
Apr 07 13:49:18 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:18.076967382Z" level=warning msg="[DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
Apr 07 13:49:18 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:49:18.076995525Z" level=info msg="Attempting next endpoint for pull after error: [DEPRECATION NOTICE] Docker Image Format v1 and Docker Image manifest version 2, schema 1 support is disabled by default and will be removed in an upcoming release. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
Apr 07 13:51:48 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:51:48.655679304Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:51:48 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:51:48.655723932Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Apr 07 13:51:48 old-k8s-version-169187 dockerd[1139]: time="2025-04-07T13:51:48.658294715Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
55bf8eb1ab94e ba04bb24b9575 5 minutes ago Running storage-provisioner 2 14d2f02ae8bd4 storage-provisioner
c66d59ac00e0b kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 5 minutes ago Running kubernetes-dashboard 0 799d3dcff5b2c kubernetes-dashboard-cd95d586-jg6t2
062895b6a45a0 25a5233254979 5 minutes ago Running kube-proxy 1 d84281d3791ed kube-proxy-d8l5m
a2086baae207e db91994f4ee8f 5 minutes ago Running coredns 1 aa9b9941055fa coredns-74ff55c5b-zpflr
fcbefe8497a0e ba04bb24b9575 5 minutes ago Exited storage-provisioner 1 14d2f02ae8bd4 storage-provisioner
1a93436156b67 1611cd07b61d5 5 minutes ago Running busybox 1 049954938d6d1 busybox
b45737d73f96c 05b738aa1bc63 6 minutes ago Running etcd 1 3ff441c73916e etcd-old-k8s-version-169187
82525be035b3a 2c08bbbc02d3a 6 minutes ago Running kube-apiserver 1 7a5f6d532d87d kube-apiserver-old-k8s-version-169187
c2da54d5c2562 1df8a2b116bd1 6 minutes ago Running kube-controller-manager 1 53776d315032b kube-controller-manager-old-k8s-version-169187
fce53c7f2eb00 e7605f88f17d6 6 minutes ago Running kube-scheduler 1 c864d4867092a kube-scheduler-old-k8s-version-169187
d784bb64a479f gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e 6 minutes ago Exited busybox 0 9a5e41c91a18a busybox
7cb4581969c6d 25a5233254979 7 minutes ago Exited kube-proxy 0 efd72198fd173 kube-proxy-d8l5m
d921178449970 db91994f4ee8f 7 minutes ago Exited coredns 0 1d46ce11bd830 coredns-74ff55c5b-zpflr
3a9781764312d e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 d67f1517233cd kube-scheduler-old-k8s-version-169187
3e48a853c03b2 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 b7c50e47771f9 kube-controller-manager-old-k8s-version-169187
78f8992ce8b47 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 4d27dc9900160 kube-apiserver-old-k8s-version-169187
f4fcf1ba0dcec 05b738aa1bc63 8 minutes ago Exited etcd 0 f9cd1cb383006 etcd-old-k8s-version-169187
==> coredns [a2086baae207] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:42712 - 40788 "HINFO IN 4832343868683583306.6690640880769357123. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012427877s
==> coredns [d92117844997] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] Reloading
[INFO] plugin/health: Going into lameduck mode for 5s
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
[INFO] Reloading complete
[INFO] 127.0.0.1:37029 - 57083 "HINFO IN 3878060877044781526.7005613683257069956. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006985932s
[INFO] SIGTERM: Shutting down servers then terminating
[INFO] plugin/health: Going into lameduck mode for 5s
I0407 13:44:29.262142 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 13:43:59.261467255 +0000 UTC m=+0.077365174) (total time: 30.0005641s):
Trace[2019727887]: [30.0005641s] [30.0005641s] END
E0407 13:44:29.262410 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0407 13:44:29.262609 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 13:43:59.262141078 +0000 UTC m=+0.078038997) (total time: 30.00044954s):
Trace[939984059]: [30.00044954s] [30.00044954s] END
E0407 13:44:29.262631 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0407 13:44:29.262903 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2025-04-07 13:43:59.262377854 +0000 UTC m=+0.078275781) (total time: 30.000509265s):
Trace[911902081]: [30.000509265s] [30.000509265s] END
E0407 13:44:29.262918 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
E0407 13:45:26.200752 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=567&timeout=6m5s&timeoutSeconds=365&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
E0407 13:45:26.200799 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=577&timeout=8m23s&timeoutSeconds=503&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
E0407 13:45:26.200828 1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=200&timeout=6m27s&timeoutSeconds=387&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
==> describe nodes <==
Name: old-k8s-version-169187
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-169187
kubernetes.io/os=linux
minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
minikube.k8s.io/name=old-k8s-version-169187
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2025_04_07T13_43_42_0700
minikube.k8s.io/version=v1.35.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 07 Apr 2025 13:43:39 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-169187
AcquireTime: <unset>
RenewTime: Mon, 07 Apr 2025 13:51:52 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Mon, 07 Apr 2025 13:51:52 +0000 Mon, 07 Apr 2025 13:43:32 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 07 Apr 2025 13:51:52 +0000 Mon, 07 Apr 2025 13:43:32 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 07 Apr 2025 13:51:52 +0000 Mon, 07 Apr 2025 13:43:32 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 07 Apr 2025 13:51:52 +0000 Mon, 07 Apr 2025 13:43:56 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-169187
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022308Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022308Ki
pods: 110
System Info:
Machine ID: cad61de569f2475aba10f198e008898b
System UUID: 6c0f61c9-c57f-493d-acd2-69f3cc3403e1
Boot ID: 234d79b0-ee5b-4f69-ac54-5d0498b7c1e5
Kernel Version: 5.15.0-1081-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://28.0.4
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m40s
kube-system coredns-74ff55c5b-zpflr 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 7m55s
kube-system etcd-old-k8s-version-169187 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m7s
kube-system kube-apiserver-old-k8s-version-169187 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m7s
kube-system kube-controller-manager-old-k8s-version-169187 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m7s
kube-system kube-proxy-d8l5m 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m55s
kube-system kube-scheduler-old-k8s-version-169187 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m7s
kube-system metrics-server-9975d5f86-7rkcc 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m27s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m53s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-8v7k4 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m35s
kubernetes-dashboard kubernetes-dashboard-cd95d586-jg6t2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m35s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (42%) 0 (0%)
memory 370Mi (4%) 170Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 8m22s (x5 over 8m22s) kubelet Node old-k8s-version-169187 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m22s (x5 over 8m22s) kubelet Node old-k8s-version-169187 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m22s (x4 over 8m22s) kubelet Node old-k8s-version-169187 status is now: NodeHasSufficientPID
Normal Starting 8m7s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m7s kubelet Node old-k8s-version-169187 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m7s kubelet Node old-k8s-version-169187 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m7s kubelet Node old-k8s-version-169187 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m7s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 7m57s kubelet Node old-k8s-version-169187 status is now: NodeReady
Normal Starting 7m54s kube-proxy Starting kube-proxy.
Normal Starting 6m4s kubelet Starting kubelet.
Normal NodeAllocatableEnforced 6m4s kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 6m3s (x8 over 6m4s) kubelet Node old-k8s-version-169187 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m3s (x8 over 6m4s) kubelet Node old-k8s-version-169187 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m3s (x7 over 6m4s) kubelet Node old-k8s-version-169187 status is now: NodeHasSufficientPID
Normal Starting 5m51s kube-proxy Starting kube-proxy.
==> dmesg <==
[Apr 7 13:06] systemd-journald[222]: Failed to send stream file descriptor to service manager: Connection refused
==> etcd [b45737d73f96] <==
2025-04-07 13:47:48.819323 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:47:58.819319 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:48:08.820510 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:48:18.819229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:48:28.819254 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:48:38.819317 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:48:48.819133 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:48:58.819349 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:49:08.819226 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:49:18.819246 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:49:28.819361 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:49:38.819190 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:49:48.819449 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:49:58.819406 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:50:08.819207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:50:18.819665 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:50:28.819229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:50:38.819362 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:50:48.819186 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:50:58.819392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:51:08.819248 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:51:18.819270 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:51:28.819220 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:51:38.819322 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:51:48.819307 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [f4fcf1ba0dce] <==
raft2025/04/07 13:43:32 INFO: ea7e25599daad906 became leader at term 2
raft2025/04/07 13:43:32 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2025-04-07 13:43:32.402201 I | etcdserver: setting up the initial cluster version to 3.4
2025-04-07 13:43:32.403302 N | etcdserver/membership: set the initial cluster version to 3.4
2025-04-07 13:43:32.403465 I | etcdserver/api: enabled capabilities for version 3.4
2025-04-07 13:43:32.403574 I | etcdserver: published {Name:old-k8s-version-169187 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2025-04-07 13:43:32.403807 I | embed: ready to serve client requests
2025-04-07 13:43:32.405178 I | embed: serving client requests on 192.168.76.2:2379
2025-04-07 13:43:32.410946 I | embed: ready to serve client requests
2025-04-07 13:43:32.412724 I | embed: serving client requests on 127.0.0.1:2379
2025-04-07 13:43:46.633469 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:43:47.369969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:43:57.369938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:44:07.369745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:44:17.369850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:44:27.369806 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:44:37.369904 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:44:47.369787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:44:57.369872 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:45:07.370072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:45:17.369718 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2025-04-07 13:45:26.289935 N | pkg/osutil: received terminated signal, shutting down...
WARNING: 2025/04/07 13:45:26 grpc: addrConn.createTransport failed to connect to {192.168.76.2:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.76.2:2379: connect: connection refused". Reconnecting...
WARNING: 2025/04/07 13:45:26 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2025-04-07 13:45:26.354672 I | etcdserver: skipped leadership transfer for single voting member cluster
==> kernel <==
13:51:53 up 7:34, 0 users, load average: 1.22, 1.90, 2.75
Linux old-k8s-version-169187 5.15.0-1081-aws #88~20.04.1-Ubuntu SMP Fri Mar 28 14:48:25 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kube-apiserver [78f8992ce8b4] <==
W0407 13:45:26.331063 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.331098 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0407 13:45:26.331448 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0407 13:45:26.331578 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0407 13:45:26.332669 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0407 13:45:26.332760 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0407 13:45:26.332906 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0407 13:45:26.332994 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W0407 13:45:26.333076 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333115 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333150 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333177 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333210 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333239 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333268 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333297 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333326 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333353 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333379 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333406 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333437 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0407 13:45:26.333472 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0407 13:45:26.333520 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
I0407 13:45:26.344355 1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
W0407 13:45:26.344535 1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
==> kube-apiserver [82525be035b3] <==
I0407 13:48:22.144316 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:48:22.144326 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0407 13:48:56.794902 1 client.go:360] parsed scheme: "passthrough"
I0407 13:48:56.794960 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:48:56.795058 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0407 13:49:02.902880 1 handler_proxy.go:102] no RequestInfo found in the context
E0407 13:49:02.902955 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0407 13:49:02.902963 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0407 13:49:37.482447 1 client.go:360] parsed scheme: "passthrough"
I0407 13:49:37.482489 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:49:37.482498 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0407 13:50:16.293930 1 client.go:360] parsed scheme: "passthrough"
I0407 13:50:16.293978 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:50:16.293987 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0407 13:50:46.894985 1 client.go:360] parsed scheme: "passthrough"
I0407 13:50:46.895029 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:50:46.895255 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0407 13:51:00.939757 1 handler_proxy.go:102] no RequestInfo found in the context
E0407 13:51:00.939984 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0407 13:51:00.940002 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0407 13:51:25.878254 1 client.go:360] parsed scheme: "passthrough"
I0407 13:51:25.878300 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0407 13:51:25.878310 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [3e48a853c03b] <==
I0407 13:43:58.134344 1 disruption.go:339] Sending events to api server.
I0407 13:43:58.148473 1 shared_informer.go:247] Caches are synced for ReplicationController
I0407 13:43:58.149348 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0407 13:43:58.153996 1 shared_informer.go:247] Caches are synced for resource quota
I0407 13:43:58.155822 1 shared_informer.go:247] Caches are synced for stateful set
I0407 13:43:58.162693 1 shared_informer.go:247] Caches are synced for daemon sets
I0407 13:43:58.166762 1 shared_informer.go:247] Caches are synced for deployment
I0407 13:43:58.167375 1 shared_informer.go:247] Caches are synced for endpoint
I0407 13:43:58.184364 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
I0407 13:43:58.191932 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d8l5m"
I0407 13:43:58.205378 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-gjv5d"
I0407 13:43:58.240557 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-zpflr"
I0407 13:43:58.299069 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0407 13:43:58.548057 1 shared_informer.go:247] Caches are synced for garbage collector
I0407 13:43:58.548095 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0407 13:43:58.599557 1 shared_informer.go:247] Caches are synced for garbage collector
I0407 13:44:00.079728 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0407 13:44:00.120484 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-gjv5d"
I0407 13:45:24.973844 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E0407 13:45:25.162165 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0407 13:45:26.097603 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-7rkcc"
E0407 13:45:26.189580 1 request.go:1011] Unexpected error when reading response body: unexpected EOF
W0407 13:45:26.189654 1 endpointslice_controller.go:284] Error syncing endpoint slices for service "kube-system/metrics-server", retrying. Error: failed to update metrics-server-trdqs EndpointSlice for Service kube-system/metrics-server: unexpected error when reading response body. Please retry. Original error: unexpected EOF
I0407 13:45:26.189885 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Service" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpointSlices" message="Error updating Endpoint Slices for Service kube-system/metrics-server: failed to update metrics-server-trdqs EndpointSlice for Service kube-system/metrics-server: unexpected error when reading response body. Please retry. Original error: unexpected EOF"
E0407 13:45:26.190032 1 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"metrics-server.18340d4074e7de8d", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"metrics-server", UID:"941a048e-4b42-4187-9153-b8fd69fcbf95", APIVersion:"v1", ResourceVersion:"567", FieldPath:""}, Reason:"FailedToUpdateEndpointSlices", Message:"Error updating Endpoint Slices for Service kube-system/metrics-server: failed to update metrics-server-trdqs EndpointSlice for Service kube-system/metrics-server: unexpected error when reading response body. Please retry. Original error: unexpected EOF", Source:v1.EventSource{Component:"endpoint-slice-controller", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1f5139d8b4dc28d, ext:114096331347, loc:(*time.Location)(0x632eb80)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1f5139d8b4dc28d, ext:114096331347, loc:(*time.Location)(0x632eb80)}}, Count:1, Type:"Warning", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://192.168.76.2:8443/api/v1/namespaces/kube-system/events": dial tcp 192.168.76.2:8443: connect: connection refused'(may retry after sleeping)
==> kube-controller-manager [c2da54d5c256] <==
W0407 13:47:24.559664 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:47:50.486473 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:47:56.210215 1 request.go:655] Throttling request took 1.048375743s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
W0407 13:47:57.061756 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:48:20.988246 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:48:28.712194 1 request.go:655] Throttling request took 1.048493439s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
W0407 13:48:29.563626 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:48:51.489981 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:49:01.214078 1 request.go:655] Throttling request took 1.043827034s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0407 13:49:02.065575 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:49:21.991838 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:49:33.716012 1 request.go:655] Throttling request took 1.048540801s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1?timeout=32s
W0407 13:49:34.567639 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:49:52.498121 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:50:06.218073 1 request.go:655] Throttling request took 1.048009912s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
W0407 13:50:07.069534 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:50:22.999946 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:50:38.720073 1 request.go:655] Throttling request took 1.048132225s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0407 13:50:39.572809 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:50:53.501680 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:51:11.223207 1 request.go:655] Throttling request took 1.048304111s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0407 13:51:12.074821 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0407 13:51:24.007369 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0407 13:51:43.725188 1 request.go:655] Throttling request took 1.048272481s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0407 13:51:44.577023 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
==> kube-proxy [062895b6a45a] <==
I0407 13:46:02.567998 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0407 13:46:02.568091 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0407 13:46:02.595809 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0407 13:46:02.595909 1 server_others.go:185] Using iptables Proxier.
I0407 13:46:02.596121 1 server.go:650] Version: v1.20.0
I0407 13:46:02.597183 1 config.go:315] Starting service config controller
I0407 13:46:02.597200 1 shared_informer.go:240] Waiting for caches to sync for service config
I0407 13:46:02.597219 1 config.go:224] Starting endpoint slice config controller
I0407 13:46:02.597223 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0407 13:46:02.697332 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0407 13:46:02.697332 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [7cb4581969c6] <==
I0407 13:43:59.939460 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0407 13:43:59.939625 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0407 13:43:59.976384 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0407 13:43:59.976505 1 server_others.go:185] Using iptables Proxier.
I0407 13:43:59.977026 1 server.go:650] Version: v1.20.0
I0407 13:43:59.982108 1 config.go:315] Starting service config controller
I0407 13:43:59.982134 1 shared_informer.go:240] Waiting for caches to sync for service config
I0407 13:43:59.982167 1 config.go:224] Starting endpoint slice config controller
I0407 13:43:59.982171 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0407 13:44:00.090757 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0407 13:44:00.090824 1 shared_informer.go:247] Caches are synced for service config
==> kube-scheduler [3a9781764312] <==
I0407 13:43:34.657699 1 serving.go:331] Generated self-signed cert in-memory
W0407 13:43:39.470613 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0407 13:43:39.470843 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0407 13:43:39.470931 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0407 13:43:39.471009 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0407 13:43:39.548348 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0407 13:43:39.549511 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 13:43:39.549531 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 13:43:39.549546 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0407 13:43:39.558399 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0407 13:43:39.563243 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0407 13:43:39.563442 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0407 13:43:39.593783 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0407 13:43:39.598129 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0407 13:43:39.598220 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0407 13:43:39.598287 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0407 13:43:39.598377 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0407 13:43:39.598447 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0407 13:43:39.598512 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0407 13:43:39.598578 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0407 13:43:39.600804 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0407 13:43:40.551205 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0407 13:43:40.568065 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0407 13:43:41.049592 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [fce53c7f2eb0] <==
I0407 13:45:55.391148 1 serving.go:331] Generated self-signed cert in-memory
W0407 13:45:59.900496 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0407 13:45:59.900612 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0407 13:45:59.900642 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0407 13:45:59.900736 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0407 13:46:00.240667 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0407 13:46:00.263592 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0407 13:46:00.270095 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 13:46:00.270130 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0407 13:46:00.373017 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Apr 07 13:49:29 old-k8s-version-169187 kubelet[1477]: E0407 13:49:29.638278 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Apr 07 13:49:30 old-k8s-version-169187 kubelet[1477]: E0407 13:49:30.638576 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:49:41 old-k8s-version-169187 kubelet[1477]: E0407 13:49:41.654838 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Apr 07 13:49:43 old-k8s-version-169187 kubelet[1477]: E0407 13:49:43.636263 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:49:56 old-k8s-version-169187 kubelet[1477]: E0407 13:49:56.633358 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Apr 07 13:49:57 old-k8s-version-169187 kubelet[1477]: E0407 13:49:57.633386 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:50:07 old-k8s-version-169187 kubelet[1477]: E0407 13:50:07.638473 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Apr 07 13:50:12 old-k8s-version-169187 kubelet[1477]: E0407 13:50:12.633385 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:50:22 old-k8s-version-169187 kubelet[1477]: E0407 13:50:22.633440 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Apr 07 13:50:25 old-k8s-version-169187 kubelet[1477]: E0407 13:50:25.633558 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:50:37 old-k8s-version-169187 kubelet[1477]: E0407 13:50:37.636391 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Apr 07 13:50:39 old-k8s-version-169187 kubelet[1477]: E0407 13:50:39.634229 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.633542 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:50:51 old-k8s-version-169187 kubelet[1477]: E0407 13:50:51.643349 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Apr 07 13:51:03 old-k8s-version-169187 kubelet[1477]: E0407 13:51:03.636916 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Apr 07 13:51:06 old-k8s-version-169187 kubelet[1477]: E0407 13:51:06.633346 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:51:17 old-k8s-version-169187 kubelet[1477]: E0407 13:51:17.638234 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Apr 07 13:51:19 old-k8s-version-169187 kubelet[1477]: E0407 13:51:19.643952 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:51:32 old-k8s-version-169187 kubelet[1477]: E0407 13:51:32.635754 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Apr 07 13:51:33 old-k8s-version-169187 kubelet[1477]: E0407 13:51:33.635745 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Apr 07 13:51:45 old-k8s-version-169187 kubelet[1477]: E0407 13:51:45.633252 1477 pod_workers.go:191] Error syncing pod 6cd1b33d-080d-4146-ad71-02fe522a5756 ("dashboard-metrics-scraper-8d5bb5db8-8v7k4_kubernetes-dashboard(6cd1b33d-080d-4146-ad71-02fe522a5756)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with ImagePullBackOff: "Back-off pulling image \"registry.k8s.io/echoserver:1.4\""
Apr 07 13:51:48 old-k8s-version-169187 kubelet[1477]: E0407 13:51:48.658846 1477 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 07 13:51:48 old-k8s-version-169187 kubelet[1477]: E0407 13:51:48.658894 1477 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 07 13:51:48 old-k8s-version-169187 kubelet[1477]: E0407 13:51:48.659031 1477 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-wkm2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843): ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Apr 07 13:51:48 old-k8s-version-169187 kubelet[1477]: E0407 13:51:48.659062 1477 pod_workers.go:191] Error syncing pod 62bc6ebf-ed95-4175-9e2f-520cf4f10843 ("metrics-server-9975d5f86-7rkcc_kube-system(62bc6ebf-ed95-4175-9e2f-520cf4f10843)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
==> kubernetes-dashboard [c66d59ac00e0] <==
2025/04/07 13:46:25 Using namespace: kubernetes-dashboard
2025/04/07 13:46:25 Using in-cluster config to connect to apiserver
2025/04/07 13:46:25 Using secret token for csrf signing
2025/04/07 13:46:25 Initializing csrf token from kubernetes-dashboard-csrf secret
2025/04/07 13:46:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2025/04/07 13:46:25 Successful initial request to the apiserver, version: v1.20.0
2025/04/07 13:46:25 Generating JWE encryption key
2025/04/07 13:46:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2025/04/07 13:46:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2025/04/07 13:46:26 Initializing JWE encryption key from synchronized object
2025/04/07 13:46:26 Creating in-cluster Sidecar client
2025/04/07 13:46:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:46:26 Serving insecurely on HTTP port: 9090
2025/04/07 13:46:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:47:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:47:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:48:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:48:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:49:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:49:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:50:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:50:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:51:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2025/04/07 13:46:25 Starting overwatch
==> storage-provisioner [55bf8eb1ab94] <==
I0407 13:46:46.771222 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0407 13:46:46.803835 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0407 13:46:46.804116 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0407 13:47:04.291868 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0407 13:47:04.294757 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-169187_68cc6a41-f96c-4ad4-b042-5afe60562cec!
I0407 13:47:04.297213 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d9afc515-4e20-4942-a68b-b86c816b4262", APIVersion:"v1", ResourceVersion:"799", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-169187_68cc6a41-f96c-4ad4-b042-5afe60562cec became leader
I0407 13:47:04.396153 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-169187_68cc6a41-f96c-4ad4-b042-5afe60562cec!
==> storage-provisioner [fcbefe8497a0] <==
I0407 13:46:02.084656 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0407 13:46:32.087032 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-169187 -n old-k8s-version-169187
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-169187 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-7rkcc dashboard-metrics-scraper-8d5bb5db8-8v7k4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-169187 describe pod metrics-server-9975d5f86-7rkcc dashboard-metrics-scraper-8d5bb5db8-8v7k4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-169187 describe pod metrics-server-9975d5f86-7rkcc dashboard-metrics-scraper-8d5bb5db8-8v7k4: exit status 1 (154.236881ms)
** stderr **
E0407 13:51:55.039105 1837799 memcache.go:287] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
E0407 13:51:55.068175 1837799 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
E0407 13:51:55.073448 1837799 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
E0407 13:51:55.077781 1837799 memcache.go:121] "Unhandled Error" err="couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request"
Error from server (NotFound): pods "metrics-server-9975d5f86-7rkcc" not found
Error from server (NotFound): pods "dashboard-metrics-scraper-8d5bb5db8-8v7k4" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-169187 describe pod metrics-server-9975d5f86-7rkcc dashboard-metrics-scraper-8d5bb5db8-8v7k4: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.84s)