=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-674802 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-674802 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m14.629231054s)
-- stdout --
* [old-k8s-version-674802] minikube v1.34.0 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=19876
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-674802" primary control-plane node in "old-k8s-version-674802" cluster
* Pulling base image v0.0.45-1729876044-19868 ...
* Restarting existing docker container for "old-k8s-version-674802" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-674802 addons enable metrics-server
* Enabled addons: metrics-server, dashboard, default-storageclass, storage-provisioner
-- /stdout --
** stderr **
I1028 11:26:42.771937 1522650 out.go:345] Setting OutFile to fd 1 ...
I1028 11:26:42.772148 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:26:42.772170 1522650 out.go:358] Setting ErrFile to fd 2...
I1028 11:26:42.772190 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:26:42.772442 1522650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
I1028 11:26:42.772804 1522650 out.go:352] Setting JSON to false
I1028 11:26:42.773742 1522650 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":148133,"bootTime":1729966670,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1028 11:26:42.773824 1522650 start.go:139] virtualization:
I1028 11:26:42.775903 1522650 out.go:177] * [old-k8s-version-674802] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1028 11:26:42.777844 1522650 out.go:177] - MINIKUBE_LOCATION=19876
I1028 11:26:42.777922 1522650 notify.go:220] Checking for updates...
I1028 11:26:42.780590 1522650 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1028 11:26:42.782162 1522650 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
I1028 11:26:42.783422 1522650 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
I1028 11:26:42.785052 1522650 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1028 11:26:42.786538 1522650 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1028 11:26:42.788272 1522650 config.go:182] Loaded profile config "old-k8s-version-674802": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1028 11:26:42.790388 1522650 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
I1028 11:26:42.791746 1522650 driver.go:394] Setting default libvirt URI to qemu:///system
I1028 11:26:42.826546 1522650 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I1028 11:26:42.826668 1522650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1028 11:26:42.909259 1522650 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-28 11:26:42.894056944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1028 11:26:42.909364 1522650 docker.go:318] overlay module found
I1028 11:26:42.911725 1522650 out.go:177] * Using the docker driver based on existing profile
I1028 11:26:42.912868 1522650 start.go:297] selected driver: docker
I1028 11:26:42.912881 1522650 start.go:901] validating driver "docker" against &{Name:old-k8s-version-674802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-674802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 11:26:42.912995 1522650 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1028 11:26:42.913658 1522650 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1028 11:26:42.982880 1522650 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-28 11:26:42.973864869 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1028 11:26:42.983203 1522650 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1028 11:26:42.983225 1522650 cni.go:84] Creating CNI manager for ""
I1028 11:26:42.983280 1522650 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1028 11:26:42.983316 1522650 start.go:340] cluster config:
{Name:old-k8s-version-674802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-674802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 11:26:42.986761 1522650 out.go:177] * Starting "old-k8s-version-674802" primary control-plane node in "old-k8s-version-674802" cluster
I1028 11:26:42.987946 1522650 cache.go:121] Beginning downloading kic base image for docker with containerd
I1028 11:26:42.989288 1522650 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
I1028 11:26:42.990516 1522650 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1028 11:26:42.990559 1522650 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I1028 11:26:42.990581 1522650 cache.go:56] Caching tarball of preloaded images
I1028 11:26:42.990655 1522650 preload.go:172] Found /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1028 11:26:42.990664 1522650 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I1028 11:26:42.990772 1522650 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/config.json ...
I1028 11:26:42.990956 1522650 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
I1028 11:26:43.008673 1522650 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon, skipping pull
I1028 11:26:43.008692 1522650 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in daemon, skipping load
I1028 11:26:43.008704 1522650 cache.go:194] Successfully downloaded all kic artifacts
I1028 11:26:43.008729 1522650 start.go:360] acquireMachinesLock for old-k8s-version-674802: {Name:mkbd322987ec66edb2ef5f7245f402a1adfd92d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 11:26:43.008781 1522650 start.go:364] duration metric: took 32.296µs to acquireMachinesLock for "old-k8s-version-674802"
I1028 11:26:43.008800 1522650 start.go:96] Skipping create...Using existing machine configuration
I1028 11:26:43.008805 1522650 fix.go:54] fixHost starting:
I1028 11:26:43.009039 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
I1028 11:26:43.030951 1522650 fix.go:112] recreateIfNeeded on old-k8s-version-674802: state=Stopped err=<nil>
W1028 11:26:43.030978 1522650 fix.go:138] unexpected machine state, will restart: <nil>
I1028 11:26:43.034311 1522650 out.go:177] * Restarting existing docker container for "old-k8s-version-674802" ...
I1028 11:26:43.037687 1522650 cli_runner.go:164] Run: docker start old-k8s-version-674802
I1028 11:26:43.426029 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
I1028 11:26:43.457446 1522650 kic.go:430] container "old-k8s-version-674802" state is running.
I1028 11:26:43.457823 1522650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-674802
I1028 11:26:43.486493 1522650 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/config.json ...
I1028 11:26:43.486721 1522650 machine.go:93] provisionDockerMachine start ...
I1028 11:26:43.486788 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
I1028 11:26:43.523688 1522650 main.go:141] libmachine: Using SSH client type: native
I1028 11:26:43.523989 1522650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 40375 <nil> <nil>}
I1028 11:26:43.524000 1522650 main.go:141] libmachine: About to run SSH command:
hostname
I1028 11:26:43.524530 1522650 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35226->127.0.0.1:40375: read: connection reset by peer
I1028 11:26:46.655020 1522650 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-674802
I1028 11:26:46.655046 1522650 ubuntu.go:169] provisioning hostname "old-k8s-version-674802"
I1028 11:26:46.655112 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
I1028 11:26:46.681319 1522650 main.go:141] libmachine: Using SSH client type: native
I1028 11:26:46.681565 1522650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 40375 <nil> <nil>}
I1028 11:26:46.681584 1522650 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-674802 && echo "old-k8s-version-674802" | sudo tee /etc/hostname
I1028 11:26:46.820451 1522650 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-674802
I1028 11:26:46.820626 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
I1028 11:26:46.845262 1522650 main.go:141] libmachine: Using SSH client type: native
I1028 11:26:46.845508 1522650 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 40375 <nil> <nil>}
I1028 11:26:46.845529 1522650 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-674802' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-674802/g' /etc/hosts;
  else
    echo '127.0.1.1 old-k8s-version-674802' | sudo tee -a /etc/hosts;
  fi
fi
I1028 11:26:46.975581 1522650 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1028 11:26:46.975679 1522650 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19876-1313708/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-1313708/.minikube}
I1028 11:26:46.975745 1522650 ubuntu.go:177] setting up certificates
I1028 11:26:46.975770 1522650 provision.go:84] configureAuth start
I1028 11:26:46.975868 1522650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-674802
I1028 11:26:46.996543 1522650 provision.go:143] copyHostCerts
I1028 11:26:46.996611 1522650 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem, removing ...
I1028 11:26:46.996628 1522650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem
I1028 11:26:46.996692 1522650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem (1078 bytes)
I1028 11:26:46.996790 1522650 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem, removing ...
I1028 11:26:46.996801 1522650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem
I1028 11:26:46.996828 1522650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem (1123 bytes)
I1028 11:26:46.996890 1522650 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem, removing ...
I1028 11:26:46.996899 1522650 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem
I1028 11:26:46.996923 1522650 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem (1675 bytes)
I1028 11:26:46.997007 1522650 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-674802 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-674802]
I1028 11:26:47.963184 1522650 provision.go:177] copyRemoteCerts
I1028 11:26:47.963312 1522650 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1028 11:26:47.963394 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
I1028 11:26:47.989822 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
I1028 11:26:48.105538 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1028 11:26:48.174450 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I1028 11:26:48.222007 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1028 11:26:48.249562 1522650 provision.go:87] duration metric: took 1.273759551s to configureAuth
I1028 11:26:48.249586 1522650 ubuntu.go:193] setting minikube options for container-runtime
I1028 11:26:48.249779 1522650 config.go:182] Loaded profile config "old-k8s-version-674802": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1028 11:26:48.249786 1522650 machine.go:96] duration metric: took 4.763050927s to provisionDockerMachine
I1028 11:26:48.249795 1522650 start.go:293] postStartSetup for "old-k8s-version-674802" (driver="docker")
I1028 11:26:48.249805 1522650 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1028 11:26:48.249852 1522650 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1028 11:26:48.249893 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
I1028 11:26:48.284861 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
I1028 11:26:48.410216 1522650 ssh_runner.go:195] Run: cat /etc/os-release
I1028 11:26:48.413760 1522650 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1028 11:26:48.413794 1522650 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1028 11:26:48.413805 1522650 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1028 11:26:48.413812 1522650 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1028 11:26:48.413823 1522650 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-1313708/.minikube/addons for local assets ...
I1028 11:26:48.413887 1522650 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-1313708/.minikube/files for local assets ...
I1028 11:26:48.413966 1522650 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem -> 13190982.pem in /etc/ssl/certs
I1028 11:26:48.414069 1522650 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1028 11:26:48.425595 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem --> /etc/ssl/certs/13190982.pem (1708 bytes)
I1028 11:26:48.455134 1522650 start.go:296] duration metric: took 205.324048ms for postStartSetup
I1028 11:26:48.455216 1522650 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1028 11:26:48.455258 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
I1028 11:26:48.499944 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
I1028 11:26:48.611336 1522650 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1028 11:26:48.616018 1522650 fix.go:56] duration metric: took 5.607204491s for fixHost
I1028 11:26:48.616046 1522650 start.go:83] releasing machines lock for "old-k8s-version-674802", held for 5.607256528s
I1028 11:26:48.616117 1522650 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-674802
I1028 11:26:48.647044 1522650 ssh_runner.go:195] Run: cat /version.json
I1028 11:26:48.647221 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
I1028 11:26:48.647119 1522650 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1028 11:26:48.647364 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
I1028 11:26:48.682005 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
I1028 11:26:48.691790 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
I1028 11:26:48.799830 1522650 ssh_runner.go:195] Run: systemctl --version
I1028 11:26:48.998974 1522650 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1028 11:26:49.017206 1522650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1028 11:26:49.040056 1522650 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1028 11:26:49.040132 1522650 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1028 11:26:49.050488 1522650 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I1028 11:26:49.050513 1522650 start.go:495] detecting cgroup driver to use...
I1028 11:26:49.050546 1522650 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1028 11:26:49.050599 1522650 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1028 11:26:49.067138 1522650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1028 11:26:49.096300 1522650 docker.go:217] disabling cri-docker service (if available) ...
I1028 11:26:49.096370 1522650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1028 11:26:49.124021 1522650 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1028 11:26:49.147013 1522650 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1028 11:26:49.325061 1522650 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1028 11:26:49.445234 1522650 docker.go:233] disabling docker service ...
I1028 11:26:49.445306 1522650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1028 11:26:49.462268 1522650 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1028 11:26:49.476297 1522650 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1028 11:26:49.598692 1522650 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1028 11:26:49.721139 1522650 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1028 11:26:49.734266 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1028 11:26:49.756510 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I1028 11:26:49.766487 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1028 11:26:49.776538 1522650 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1028 11:26:49.776607 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1028 11:26:49.786532 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1028 11:26:49.797244 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1028 11:26:49.806851 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1028 11:26:49.816860 1522650 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1028 11:26:49.826074 1522650 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1028 11:26:49.835894 1522650 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1028 11:26:49.844849 1522650 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1028 11:26:49.853785 1522650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1028 11:26:49.958801 1522650 ssh_runner.go:195] Run: sudo systemctl restart containerd
I1028 11:26:50.175979 1522650 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1028 11:26:50.176063 1522650 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1028 11:26:50.179880 1522650 start.go:563] Will wait 60s for crictl version
I1028 11:26:50.179946 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:26:50.188477 1522650 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1028 11:26:50.234137 1522650 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1028 11:26:50.234214 1522650 ssh_runner.go:195] Run: containerd --version
I1028 11:26:50.267336 1522650 ssh_runner.go:195] Run: containerd --version
I1028 11:26:50.313755 1522650 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
I1028 11:26:50.315009 1522650 cli_runner.go:164] Run: docker network inspect old-k8s-version-674802 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1028 11:26:50.339895 1522650 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I1028 11:26:50.343773 1522650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1028 11:26:50.362127 1522650 kubeadm.go:883] updating cluster {Name:old-k8s-version-674802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-674802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1028 11:26:50.362231 1522650 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1028 11:26:50.362288 1522650 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:26:50.460278 1522650 containerd.go:627] all images are preloaded for containerd runtime.
I1028 11:26:50.460305 1522650 containerd.go:534] Images already preloaded, skipping extraction
I1028 11:26:50.460364 1522650 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:26:50.506419 1522650 containerd.go:627] all images are preloaded for containerd runtime.
I1028 11:26:50.506449 1522650 cache_images.go:84] Images are preloaded, skipping loading
I1028 11:26:50.506458 1522650 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I1028 11:26:50.506614 1522650 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-674802 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-674802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1028 11:26:50.506702 1522650 ssh_runner.go:195] Run: sudo crictl info
I1028 11:26:50.557105 1522650 cni.go:84] Creating CNI manager for ""
I1028 11:26:50.557130 1522650 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1028 11:26:50.557139 1522650 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1028 11:26:50.557160 1522650 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-674802 NodeName:old-k8s-version-674802 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I1028 11:26:50.557285 1522650 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.76.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  name: "old-k8s-version-674802"
  kubeletExtraArgs:
    node-ip: 192.168.76.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1028 11:26:50.557351 1522650 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I1028 11:26:50.566873 1522650 binaries.go:44] Found k8s binaries, skipping transfer
I1028 11:26:50.566984 1522650 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1028 11:26:50.575993 1522650 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I1028 11:26:50.595920 1522650 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1028 11:26:50.615030 1522650 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
I1028 11:26:50.634097 1522650 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I1028 11:26:50.637774 1522650 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1028 11:26:50.670452 1522650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1028 11:26:50.789010 1522650 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1028 11:26:50.803937 1522650 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802 for IP: 192.168.76.2
I1028 11:26:50.803959 1522650 certs.go:194] generating shared ca certs ...
I1028 11:26:50.803975 1522650 certs.go:226] acquiring lock for ca certs: {Name:mk0d3ceca6221298cea760035b38b9c704e7b693 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 11:26:50.804101 1522650 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.key
I1028 11:26:50.804145 1522650 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.key
I1028 11:26:50.804159 1522650 certs.go:256] generating profile certs ...
I1028 11:26:50.804241 1522650 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/client.key
I1028 11:26:50.804309 1522650 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/apiserver.key.bd2ec1af
I1028 11:26:50.804352 1522650 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/proxy-client.key
I1028 11:26:50.804465 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098.pem (1338 bytes)
W1028 11:26:50.804499 1522650 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098_empty.pem, impossibly tiny 0 bytes
I1028 11:26:50.804507 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem (1675 bytes)
I1028 11:26:50.804531 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem (1078 bytes)
I1028 11:26:50.804553 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem (1123 bytes)
I1028 11:26:50.804573 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem (1675 bytes)
I1028 11:26:50.804617 1522650 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem (1708 bytes)
I1028 11:26:50.805228 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1028 11:26:50.832202 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1028 11:26:50.902360 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1028 11:26:50.941743 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1028 11:26:50.989445 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I1028 11:26:51.044784 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1028 11:26:51.079320 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1028 11:26:51.105341 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/old-k8s-version-674802/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1028 11:26:51.151280 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem --> /usr/share/ca-certificates/13190982.pem (1708 bytes)
I1028 11:26:51.184932 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1028 11:26:51.214347 1522650 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098.pem --> /usr/share/ca-certificates/1319098.pem (1338 bytes)
I1028 11:26:51.238859 1522650 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1028 11:26:51.257470 1522650 ssh_runner.go:195] Run: openssl version
I1028 11:26:51.263613 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190982.pem && ln -fs /usr/share/ca-certificates/13190982.pem /etc/ssl/certs/13190982.pem"
I1028 11:26:51.273432 1522650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190982.pem
I1028 11:26:51.277389 1522650 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 10:48 /usr/share/ca-certificates/13190982.pem
I1028 11:26:51.277510 1522650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190982.pem
I1028 11:26:51.285038 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13190982.pem /etc/ssl/certs/3ec20f2e.0"
I1028 11:26:51.294408 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1028 11:26:51.304498 1522650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1028 11:26:51.308461 1522650 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:41 /usr/share/ca-certificates/minikubeCA.pem
I1028 11:26:51.308583 1522650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1028 11:26:51.316187 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1028 11:26:51.326008 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1319098.pem && ln -fs /usr/share/ca-certificates/1319098.pem /etc/ssl/certs/1319098.pem"
I1028 11:26:51.335282 1522650 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1319098.pem
I1028 11:26:51.339841 1522650 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 10:48 /usr/share/ca-certificates/1319098.pem
I1028 11:26:51.339910 1522650 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1319098.pem
I1028 11:26:51.347851 1522650 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1319098.pem /etc/ssl/certs/51391683.0"
I1028 11:26:51.357263 1522650 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1028 11:26:51.361661 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I1028 11:26:51.369429 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I1028 11:26:51.377289 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I1028 11:26:51.384634 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I1028 11:26:51.392607 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I1028 11:26:51.400147 1522650 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I1028 11:26:51.408288 1522650 kubeadm.go:392] StartCluster: {Name:old-k8s-version-674802 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-674802 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 11:26:51.408412 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1028 11:26:51.408510 1522650 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1028 11:26:51.468025 1522650 cri.go:89] found id: "2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
I1028 11:26:51.468049 1522650 cri.go:89] found id: "120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
I1028 11:26:51.468054 1522650 cri.go:89] found id: "794cbb23bfba6dd5b645283f7c87ee46bd33b5c5728a364d13fbce246d0811a5"
I1028 11:26:51.468059 1522650 cri.go:89] found id: "8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
I1028 11:26:51.468062 1522650 cri.go:89] found id: "4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
I1028 11:26:51.468066 1522650 cri.go:89] found id: "ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
I1028 11:26:51.468070 1522650 cri.go:89] found id: "01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
I1028 11:26:51.468073 1522650 cri.go:89] found id: "857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
I1028 11:26:51.468078 1522650 cri.go:89] found id: ""
I1028 11:26:51.468143 1522650 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I1028 11:26:51.481164 1522650 cri.go:116] JSON = null
W1028 11:26:51.481216 1522650 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
I1028 11:26:51.481275 1522650 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1028 11:26:51.491812 1522650 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I1028 11:26:51.491832 1522650 kubeadm.go:593] restartPrimaryControlPlane start ...
I1028 11:26:51.491882 1522650 ssh_runner.go:195] Run: sudo test -d /data/minikube
I1028 11:26:51.500270 1522650 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I1028 11:26:51.500843 1522650 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-674802" does not appear in /home/jenkins/minikube-integration/19876-1313708/kubeconfig
I1028 11:26:51.501125 1522650 kubeconfig.go:62] /home/jenkins/minikube-integration/19876-1313708/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-674802" cluster setting kubeconfig missing "old-k8s-version-674802" context setting]
I1028 11:26:51.501526 1522650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/kubeconfig: {Name:mk63efc7fcbbc1d4439be659e836c582c1d1641a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
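Here kubeconfig.go notices the profile has neither a cluster nor a context entry and repairs the file under the WriteFile lock shown above. A sketch of that kind of repair, assuming client-go's clientcmd package; this is illustrative, not minikube's actual implementation:

```go
package kubecfg

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig inserts missing cluster and context entries for the
// profile, then writes the file back (minikube does the write while
// holding the lock shown in the log).
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &api.Cluster{Server: server}
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	}
	return clientcmd.WriteToFile(*cfg, path)
}
```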
I1028 11:26:51.503108 1522650 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I1028 11:26:51.512702 1522650 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I1028 11:26:51.512747 1522650 kubeadm.go:597] duration metric: took 20.899746ms to restartPrimaryControlPlane
I1028 11:26:51.512781 1522650 kubeadm.go:394] duration metric: took 104.501349ms to StartCluster
I1028 11:26:51.512798 1522650 settings.go:142] acquiring lock: {Name:mk753f039bf116e385865ce8de020c5ca21e9c34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 11:26:51.512884 1522650 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19876-1313708/kubeconfig
I1028 11:26:51.513567 1522650 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/kubeconfig: {Name:mk63efc7fcbbc1d4439be659e836c582c1d1641a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 11:26:51.513849 1522650 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1028 11:26:51.514088 1522650 config.go:182] Loaded profile config "old-k8s-version-674802": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1028 11:26:51.514202 1522650 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I1028 11:26:51.514494 1522650 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-674802"
I1028 11:26:51.514511 1522650 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-674802"
W1028 11:26:51.514523 1522650 addons.go:243] addon storage-provisioner should already be in state true
I1028 11:26:51.514553 1522650 host.go:66] Checking if "old-k8s-version-674802" exists ...
I1028 11:26:51.515061 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
I1028 11:26:51.515345 1522650 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-674802"
I1028 11:26:51.515387 1522650 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-674802"
I1028 11:26:51.515768 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
I1028 11:26:51.516270 1522650 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-674802"
I1028 11:26:51.516295 1522650 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-674802"
W1028 11:26:51.516316 1522650 addons.go:243] addon metrics-server should already be in state true
I1028 11:26:51.516354 1522650 host.go:66] Checking if "old-k8s-version-674802" exists ...
I1028 11:26:51.516874 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
I1028 11:26:51.518952 1522650 addons.go:69] Setting dashboard=true in profile "old-k8s-version-674802"
I1028 11:26:51.518991 1522650 addons.go:234] Setting addon dashboard=true in "old-k8s-version-674802"
W1028 11:26:51.519109 1522650 addons.go:243] addon dashboard should already be in state true
I1028 11:26:51.519180 1522650 host.go:66] Checking if "old-k8s-version-674802" exists ...
I1028 11:26:51.519419 1522650 out.go:177] * Verifying Kubernetes components...
I1028 11:26:51.521384 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
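Each cli_runner line above shells out to docker to read the container's state before an addon touches the machine; roughly:

```go
package driver

import (
	"os/exec"
	"strings"
)

// containerStatus shows what each cli_runner "docker container inspect
// --format={{.State.Status}}" call above boils down to.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}
```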
I1028 11:26:51.522915 1522650 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1028 11:26:51.554843 1522650 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1028 11:26:51.557304 1522650 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1028 11:26:51.557330 1522650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1028 11:26:51.557392 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
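"scp memory" means the manifest bytes are embedded in the minikube binary and streamed to the node rather than read from a local file. A hedged illustration over an existing SSH client; minikube's sshutil actually speaks the scp protocol, and the sudo tee approach here is an assumption for brevity:

```go
package sshutil

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// copyToNode streams manifest bytes over an SSH session and writes them
// on the node. Illustrative only; not minikube's transfer code.
func copyToNode(client *ssh.Client, data []byte, dest string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + dest + " >/dev/null")
}
```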
I1028 11:26:51.569474 1522650 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I1028 11:26:51.575747 1522650 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I1028 11:26:51.575774 1522650 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I1028 11:26:51.575846 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
I1028 11:26:51.603148 1522650 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-674802"
W1028 11:26:51.603173 1522650 addons.go:243] addon default-storageclass should already be in state true
I1028 11:26:51.603202 1522650 host.go:66] Checking if "old-k8s-version-674802" exists ...
I1028 11:26:51.603636 1522650 cli_runner.go:164] Run: docker container inspect old-k8s-version-674802 --format={{.State.Status}}
I1028 11:26:51.615737 1522650 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I1028 11:26:51.619808 1522650 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I1028 11:26:51.622191 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I1028 11:26:51.622216 1522650 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I1028 11:26:51.622293 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
I1028 11:26:51.646206 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
I1028 11:26:51.654133 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
I1028 11:26:51.665394 1522650 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I1028 11:26:51.665415 1522650 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1028 11:26:51.665475 1522650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-674802
I1028 11:26:51.685383 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
I1028 11:26:51.705591 1522650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40375 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/old-k8s-version-674802/id_rsa Username:docker}
I1028 11:26:51.734351 1522650 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1028 11:26:51.792846 1522650 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-674802" to be "Ready" ...
I1028 11:26:51.845870 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 11:26:51.911602 1522650 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I1028 11:26:51.911657 1522650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I1028 11:26:51.953990 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I1028 11:26:51.954015 1522650 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I1028 11:26:51.961134 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1028 11:26:52.021552 1522650 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I1028 11:26:52.021618 1522650 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I1028 11:26:52.028608 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I1028 11:26:52.028695 1522650 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I1028 11:26:52.123324 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I1028 11:26:52.123389 1522650 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I1028 11:26:52.130057 1522650 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I1028 11:26:52.130127 1522650 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
W1028 11:26:52.140363 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.140498 1522650 retry.go:31] will retry after 253.713134ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
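From here the log is dominated by retry.go: every kubectl apply fails with "connection refused" while the apiserver comes back up, and each failure schedules another attempt after a growing, jittered delay. A sketch of that pattern; the backoff parameters are assumptions read off the roughly 200ms to 4s delays below:

```go
package retry

import (
	"log"
	"math/rand"
	"time"
)

// retryApply runs apply and, on failure, logs "will retry after <delay>"
// and sleeps a growing, jittered delay before the next attempt, matching
// the shape of the retry.go lines in this log.
func retryApply(apply func() error, attempts int, base time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		delay := base * time.Duration(1<<uint(i))
		delay += time.Duration(rand.Int63n(int64(delay))) // jitter
		log.Printf("will retry after %v: %v", delay, err)
		time.Sleep(delay)
	}
	return err
}
```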
I1028 11:26:52.186755 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I1028 11:26:52.186826 1522650 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I1028 11:26:52.212159 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 11:26:52.231048 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.231133 1522650 retry.go:31] will retry after 208.640397ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.235574 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I1028 11:26:52.235657 1522650 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I1028 11:26:52.257737 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I1028 11:26:52.257812 1522650 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I1028 11:26:52.320514 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I1028 11:26:52.320586 1522650 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
W1028 11:26:52.382189 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.382267 1522650 retry.go:31] will retry after 350.486593ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.394589 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I1028 11:26:52.394735 1522650 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I1028 11:26:52.394717 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 11:26:52.439424 1522650 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I1028 11:26:52.439505 1522650 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I1028 11:26:52.439954 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1028 11:26:52.509815 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
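The dashboard addon is applied as a single kubectl invocation carrying ten -f flags. Roughly how such a command line is assembled; running kubectl locally via exec is a simplification, since in the log the command is executed on the node through ssh_runner:

```go
package addons

import (
	"os"
	"os/exec"
)

// applyManifests turns each manifest path into its own -f flag and runs
// one kubectl apply, mirroring the long command lines above.
func applyManifests(kubectl, kubeconfig string, files []string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	return cmd.Run()
}
```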
W1028 11:26:52.571870 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.571958 1522650 retry.go:31] will retry after 279.599257ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 11:26:52.634449 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.634522 1522650 retry.go:31] will retry after 455.650149ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 11:26:52.674393 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.674471 1522650 retry.go:31] will retry after 215.350311ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.733673 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 11:26:52.837754 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.837792 1522650 retry.go:31] will retry after 248.280633ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.851959 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 11:26:52.890257 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1028 11:26:52.967770 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:52.967806 1522650 retry.go:31] will retry after 410.199026ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 11:26:53.049496 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:53.049537 1522650 retry.go:31] will retry after 279.498608ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:53.086743 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1028 11:26:53.091133 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1028 11:26:53.259664 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:53.259699 1522650 retry.go:31] will retry after 526.438393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 11:26:53.259744 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:53.259758 1522650 retry.go:31] will retry after 500.371946ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:53.329567 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1028 11:26:53.378898 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1028 11:26:53.441097 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:53.441134 1522650 retry.go:31] will retry after 556.126662ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 11:26:53.519558 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:53.519601 1522650 retry.go:31] will retry after 1.095915489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:53.760828 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1028 11:26:53.787216 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1028 11:26:53.793794 1522650 node_ready.go:53] error getting node "old-k8s-version-674802": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-674802": dial tcp 192.168.76.2:8443: connect: connection refused
W1028 11:26:53.886002 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:53.886081 1522650 retry.go:31] will retry after 758.874318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 11:26:53.962734 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:53.962814 1522650 retry.go:31] will retry after 550.647538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:53.998122 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1028 11:26:54.100989 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:54.101074 1522650 retry.go:31] will retry after 1.198557101s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:54.514278 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1028 11:26:54.615754 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1028 11:26:54.622188 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:54.622289 1522650 retry.go:31] will retry after 1.015831792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:54.645506 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1028 11:26:54.793964 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:54.794074 1522650 retry.go:31] will retry after 1.759219185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 11:26:54.835502 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:54.835589 1522650 retry.go:31] will retry after 1.351958061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:55.300724 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1028 11:26:55.420315 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:55.420352 1522650 retry.go:31] will retry after 1.848775647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:55.638820 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 11:26:55.733149 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:55.733187 1522650 retry.go:31] will retry after 2.083947839s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:56.188389 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W1028 11:26:56.282971 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:56.283000 1522650 retry.go:31] will retry after 2.775690342s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:56.293526 1522650 node_ready.go:53] error getting node "old-k8s-version-674802": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-674802": dial tcp 192.168.76.2:8443: connect: connection refused
I1028 11:26:56.554030 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1028 11:26:56.660380 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:56.660410 1522650 retry.go:31] will retry after 1.636434797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:57.269399 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1028 11:26:57.376808 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:57.376854 1522650 retry.go:31] will retry after 1.712727594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:57.818353 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 11:26:57.910864 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:57.910894 1522650 retry.go:31] will retry after 1.53323177s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:58.297721 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W1028 11:26:58.407643 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:58.407672 1522650 retry.go:31] will retry after 1.974364231s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:58.793475 1522650 node_ready.go:53] error getting node "old-k8s-version-674802": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-674802": dial tcp 192.168.76.2:8443: connect: connection refused
I1028 11:26:59.058947 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1028 11:26:59.090344 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W1028 11:26:59.181013 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:59.181049 1522650 retry.go:31] will retry after 3.909179468s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W1028 11:26:59.246632 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:59.246671 1522650 retry.go:31] will retry after 2.560689734s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:59.444336 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W1028 11:26:59.537550 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:26:59.537584 1522650 retry.go:31] will retry after 4.434253189s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I1028 11:27:00.383125 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 11:27:00.793734 1522650 node_ready.go:53] error getting node "old-k8s-version-674802": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-674802": dial tcp 192.168.76.2:8443: connect: connection refused
I1028 11:27:01.808284 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I1028 11:27:03.091337 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I1028 11:27:03.972411 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I1028 11:27:10.874306 1522650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.491139351s)
W1028 11:27:10.874345 1522650 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I1028 11:27:10.874363 1522650 retry.go:31] will retry after 3.978863022s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
Unable to connect to the server: net/http: TLS handshake timeout
I1028 11:27:11.293948 1522650 node_ready.go:53] error getting node "old-k8s-version-674802": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-674802": net/http: TLS handshake timeout
I1028 11:27:11.724020 1522650 node_ready.go:49] node "old-k8s-version-674802" has status "Ready":"True"
I1028 11:27:11.724049 1522650 node_ready.go:38] duration metric: took 19.931110641s for node "old-k8s-version-674802" to be "Ready" ...
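The node took about 20s to report Ready, with the intermediate "connection refused" and TLS-timeout errors swallowed so the poll simply tries again. A sketch of node_ready.go's loop, assuming client-go:

```go
package node

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls the node object and succeeds once the NodeReady
// condition is True; transient apiserver errors are tolerated and retried.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // retry on transient apiserver errors
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```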
I1028 11:27:11.724058 1522650 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1028 11:27:11.961765 1522650 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-wlp24" in "kube-system" namespace to be "Ready" ...
I1028 11:27:12.070538 1522650 pod_ready.go:93] pod "coredns-74ff55c5b-wlp24" in "kube-system" namespace has status "Ready":"True"
I1028 11:27:12.070568 1522650 pod_ready.go:82] duration metric: took 106.197986ms for pod "coredns-74ff55c5b-wlp24" in "kube-system" namespace to be "Ready" ...
I1028 11:27:12.070582 1522650 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
I1028 11:27:12.102882 1522650 pod_ready.go:93] pod "etcd-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"True"
I1028 11:27:12.102955 1522650 pod_ready.go:82] duration metric: took 32.364855ms for pod "etcd-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
I1028 11:27:12.102985 1522650 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
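The pod_ready.go waits above and below use the same polling shape; the per-pod condition reduces to checking PodReady:

```go
package node

import corev1 "k8s.io/api/core/v1"

// podReady reports whether a pod's PodReady condition is True, the
// check behind each pod_ready.go "Ready" line in this log.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```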
I1028 11:27:13.086088 1522650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.994713981s)
I1028 11:27:13.086452 1522650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.114011735s)
I1028 11:27:13.086518 1522650 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-674802"
I1028 11:27:13.086602 1522650 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.278281187s)
I1028 11:27:13.089571 1522650 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-674802 addons enable metrics-server
I1028 11:27:14.109158 1522650 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:14.854261 1522650 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I1028 11:27:15.629946 1522650 out.go:177] * Enabled addons: metrics-server, dashboard, default-storageclass, storage-provisioner
I1028 11:27:15.632956 1522650 addons.go:510] duration metric: took 24.118743014s for enable addons: enabled=[metrics-server dashboard default-storageclass storage-provisioner]
I1028 11:27:16.115452 1522650 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:18.610031 1522650 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:20.609886 1522650 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"True"
I1028 11:27:20.609911 1522650 pod_ready.go:82] duration metric: took 8.506904709s for pod "kube-apiserver-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
I1028 11:27:20.609924 1522650 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
I1028 11:27:22.616429 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:25.116549 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:27.616074 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:29.616632 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:32.117065 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:34.616765 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:36.616869 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:38.619813 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:41.116175 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:43.116575 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:45.118253 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:47.134364 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:49.624345 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:52.116376 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:54.116854 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:56.117962 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:27:58.616380 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:00.616542 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:02.617251 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:05.115519 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:07.116478 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:09.617133 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:12.173571 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:14.615949 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:16.617708 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:19.115921 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:21.116930 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:23.615332 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:25.618942 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:28.116393 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:30.117263 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:32.616331 1522650 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:33.615914 1522650 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"True"
I1028 11:28:33.615939 1522650 pod_ready.go:82] duration metric: took 1m13.006007915s for pod "kube-controller-manager-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
I1028 11:28:33.615951 1522650 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sdcls" in "kube-system" namespace to be "Ready" ...
I1028 11:28:33.621161 1522650 pod_ready.go:93] pod "kube-proxy-sdcls" in "kube-system" namespace has status "Ready":"True"
I1028 11:28:33.621190 1522650 pod_ready.go:82] duration metric: took 5.230393ms for pod "kube-proxy-sdcls" in "kube-system" namespace to be "Ready" ...
I1028 11:28:33.621203 1522650 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
I1028 11:28:33.626137 1522650 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-674802" in "kube-system" namespace has status "Ready":"True"
I1028 11:28:33.626166 1522650 pod_ready.go:82] duration metric: took 4.955793ms for pod "kube-scheduler-old-k8s-version-674802" in "kube-system" namespace to be "Ready" ...
I1028 11:28:33.626177 1522650 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace to be "Ready" ...
I1028 11:28:35.632722 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:37.632853 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:40.132535 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:42.133063 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:44.632409 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:47.133283 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:49.632557 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:51.633555 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:54.131835 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:56.132042 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:28:58.134501 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:00.134811 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:02.631708 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:05.133327 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:07.632559 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:10.132196 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:12.133134 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:14.136072 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:16.138475 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:18.633285 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:21.132111 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:23.132241 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:25.632359 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:27.633222 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:29.633651 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:32.134732 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:34.632661 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:36.633305 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:39.133679 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:41.631857 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:43.633865 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:46.132741 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:48.632693 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:50.633998 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:53.132201 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:55.132868 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:57.133028 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:29:59.632114 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:01.632659 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:04.132047 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:06.132395 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:08.132439 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:10.632541 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:12.633400 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:15.132962 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:17.632661 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:20.132524 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:22.133109 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:24.632048 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:26.632153 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:28.632624 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:30.632947 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:33.131694 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:35.133071 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:37.632517 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:39.632741 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:42.133251 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:44.631573 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:46.632230 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:48.632354 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:50.632946 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:53.132077 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:55.132720 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:30:57.632352 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:00.133619 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:02.632622 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:05.132476 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:07.132537 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:09.132882 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:11.632721 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:13.694163 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:16.133028 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:18.633339 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:21.137509 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:23.632552 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:25.632827 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:28.132669 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:30.132988 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:32.632463 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:35.132552 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:37.133003 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:39.632266 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:41.632492 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:44.132365 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:46.133181 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:48.633037 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:51.132608 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:53.632395 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:56.132659 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:31:58.632631 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:00.632696 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:03.133533 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:05.633050 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:08.131970 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:10.132316 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:12.132996 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:14.631556 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:16.635524 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:19.131917 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:21.132458 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:23.641916 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:26.132630 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:28.633137 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:31.132354 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:33.132407 1522650 pod_ready.go:103] pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace has status "Ready":"False"
I1028 11:32:33.631824 1522650 pod_ready.go:82] duration metric: took 4m0.005630495s for pod "metrics-server-9975d5f86-lv8qx" in "kube-system" namespace to be "Ready" ...
E1028 11:32:33.631849 1522650 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
I1028 11:32:33.631859 1522650 pod_ready.go:39] duration metric: took 5m21.907789977s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
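
Note: the metrics-server wait above was bound to fail. As the kubelet problems further down show, the pod's image is pinned to fake.domain, an unresolvable registry, so the container can never be pulled and the pod never turns Ready; after 4m0s the wait surfaces "context deadline exceeded". A stdlib-only sketch of that deadline-bounded poll (interval and structure illustrative, not minikube's exact code):

    func waitPodReady(check func(context.Context) bool) error {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        tick := time.NewTicker(2 * time.Second)
        defer tick.Stop()
        for {
            select {
            case <-ctx.Done():
                // surfaces as "context deadline exceeded", as in the E line above
                return fmt.Errorf("waitPodCondition: %w", ctx.Err())
            case <-tick.C:
                if check(ctx) {
                    return nil
                }
            }
        }
    }
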
I1028 11:32:33.631875 1522650 api_server.go:52] waiting for apiserver process to appear ...
I1028 11:32:33.631912 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1028 11:32:33.631979 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1028 11:32:33.671097 1522650 cri.go:89] found id: "c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
I1028 11:32:33.671160 1522650 cri.go:89] found id: "ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
I1028 11:32:33.671179 1522650 cri.go:89] found id: ""
I1028 11:32:33.671201 1522650 logs.go:282] 2 containers: [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc]
I1028 11:32:33.671290 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.674939 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.678299 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1028 11:32:33.678365 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1028 11:32:33.718743 1522650 cri.go:89] found id: "6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
I1028 11:32:33.718767 1522650 cri.go:89] found id: "01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
I1028 11:32:33.718772 1522650 cri.go:89] found id: ""
I1028 11:32:33.718780 1522650 logs.go:282] 2 containers: [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232]
I1028 11:32:33.718835 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.722744 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.726556 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1028 11:32:33.726631 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1028 11:32:33.765910 1522650 cri.go:89] found id: "b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
I1028 11:32:33.765934 1522650 cri.go:89] found id: "2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
I1028 11:32:33.765940 1522650 cri.go:89] found id: ""
I1028 11:32:33.765947 1522650 logs.go:282] 2 containers: [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f]
I1028 11:32:33.766003 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.769566 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.772996 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1028 11:32:33.773098 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1028 11:32:33.812180 1522650 cri.go:89] found id: "31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
I1028 11:32:33.812243 1522650 cri.go:89] found id: "857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
I1028 11:32:33.812255 1522650 cri.go:89] found id: ""
I1028 11:32:33.812263 1522650 logs.go:282] 2 containers: [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056]
I1028 11:32:33.812322 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.816121 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.819678 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1028 11:32:33.819779 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1028 11:32:33.857214 1522650 cri.go:89] found id: "c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
I1028 11:32:33.857283 1522650 cri.go:89] found id: "8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
I1028 11:32:33.857297 1522650 cri.go:89] found id: ""
I1028 11:32:33.857305 1522650 logs.go:282] 2 containers: [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015]
I1028 11:32:33.857368 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.860914 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.864200 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1028 11:32:33.864267 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1028 11:32:33.915951 1522650 cri.go:89] found id: "056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
I1028 11:32:33.916024 1522650 cri.go:89] found id: "4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
I1028 11:32:33.916042 1522650 cri.go:89] found id: ""
I1028 11:32:33.916061 1522650 logs.go:282] 2 containers: [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7]
I1028 11:32:33.916149 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.919578 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.922823 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1028 11:32:33.922911 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1028 11:32:33.966708 1522650 cri.go:89] found id: "42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
I1028 11:32:33.966732 1522650 cri.go:89] found id: "120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
I1028 11:32:33.966737 1522650 cri.go:89] found id: ""
I1028 11:32:33.966745 1522650 logs.go:282] 2 containers: [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33]
I1028 11:32:33.966834 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.970820 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:33.974248 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1028 11:32:33.974365 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1028 11:32:34.016524 1522650 cri.go:89] found id: "9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
I1028 11:32:34.016553 1522650 cri.go:89] found id: ""
I1028 11:32:34.016562 1522650 logs.go:282] 1 container: [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b]
I1028 11:32:34.016619 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:34.020458 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1028 11:32:34.020542 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1028 11:32:34.066319 1522650 cri.go:89] found id: "af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
I1028 11:32:34.066393 1522650 cri.go:89] found id: "e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
I1028 11:32:34.066411 1522650 cri.go:89] found id: ""
I1028 11:32:34.066425 1522650 logs.go:282] 2 containers: [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5]
I1028 11:32:34.066496 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:34.069913 1522650 ssh_runner.go:195] Run: which crictl
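
Note: the container-ID discovery above is one crictl invocation per control-plane component, with "which crictl" runs interleaved to locate the binary. The equivalent call from Go, assuming only os/exec and strings (the wrapper is illustrative; the command line is copied from the log):

    // listContainerIDs mirrors `sudo crictl ps -a --quiet --name=<component>`:
    // it returns the IDs of all matching containers, running or exited.
    func listContainerIDs(component string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one 64-hex ID per line
    }
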
I1028 11:32:34.073421 1522650 logs.go:123] Gathering logs for describe nodes ...
I1028 11:32:34.073480 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1028 11:32:34.210043 1522650 logs.go:123] Gathering logs for etcd [01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232] ...
I1028 11:32:34.210070 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
I1028 11:32:34.253651 1522650 logs.go:123] Gathering logs for coredns [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59] ...
I1028 11:32:34.253678 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
I1028 11:32:34.291333 1522650 logs.go:123] Gathering logs for kube-scheduler [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824] ...
I1028 11:32:34.291362 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
I1028 11:32:34.331399 1522650 logs.go:123] Gathering logs for kube-controller-manager [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438] ...
I1028 11:32:34.331554 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
I1028 11:32:34.391065 1522650 logs.go:123] Gathering logs for kindnet [120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33] ...
I1028 11:32:34.391103 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
I1028 11:32:34.448609 1522650 logs.go:123] Gathering logs for kubernetes-dashboard [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b] ...
I1028 11:32:34.448637 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
I1028 11:32:34.520640 1522650 logs.go:123] Gathering logs for kubelet ...
I1028 11:32:34.520667 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1028 11:32:34.575602 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.736433 662 reflector.go:138] object-"default"/"default-token-rkh5t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rkh5t" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:34.575998 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.767488 662 reflector.go:138] object-"kube-system"/"metrics-server-token-bnmqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-bnmqq" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:34.576215 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.769521 662 reflector.go:138] object-"kube-system"/"kindnet-token-rljtj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rljtj" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:34.576429 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.783592 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-v6b5p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-v6b5p" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:34.576637 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786532 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:34.576861 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786656 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-7s7sg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-7s7sg" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:34.577070 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786717 662 reflector.go:138] object-"kube-system"/"coredns-token-t6lq7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-t6lq7" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:34.577270 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786765 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:34.588025 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:13 old-k8s-version-674802 kubelet[662]: E1028 11:27:13.704595 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:34.590509 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:14 old-k8s-version-674802 kubelet[662]: E1028 11:27:14.680234 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.593337 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:27 old-k8s-version-674802 kubelet[662]: E1028 11:27:27.405191 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:34.595471 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:36 old-k8s-version-674802 kubelet[662]: E1028 11:27:36.774117 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.595816 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:37 old-k8s-version-674802 kubelet[662]: E1028 11:27:37.779544 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.596145 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:38 old-k8s-version-674802 kubelet[662]: E1028 11:27:38.781616 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.596330 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:39 old-k8s-version-674802 kubelet[662]: E1028 11:27:39.404833 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.597109 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:45 old-k8s-version-674802 kubelet[662]: E1028 11:27:45.806023 662 pod_workers.go:191] Error syncing pod eb6e0fb4-e030-4eb7-8b96-477de7691df6 ("storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"
W1028 11:32:34.599930 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:51 old-k8s-version-674802 kubelet[662]: E1028 11:27:51.412111 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:34.600524 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:52 old-k8s-version-674802 kubelet[662]: E1028 11:27:52.828529 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.601001 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:58 old-k8s-version-674802 kubelet[662]: E1028 11:27:58.495326 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.601187 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:06 old-k8s-version-674802 kubelet[662]: E1028 11:28:06.398915 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.601514 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:11 old-k8s-version-674802 kubelet[662]: E1028 11:28:11.397340 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.601701 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:18 old-k8s-version-674802 kubelet[662]: E1028 11:28:18.397191 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.602284 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:25 old-k8s-version-674802 kubelet[662]: E1028 11:28:25.930176 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.602842 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:28 old-k8s-version-674802 kubelet[662]: E1028 11:28:28.494838 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.605303 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:32 old-k8s-version-674802 kubelet[662]: E1028 11:28:32.410167 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:34.605634 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:39 old-k8s-version-674802 kubelet[662]: E1028 11:28:39.398920 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.605818 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:43 old-k8s-version-674802 kubelet[662]: E1028 11:28:43.399781 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.606151 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:50 old-k8s-version-674802 kubelet[662]: E1028 11:28:50.396875 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.606339 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:56 old-k8s-version-674802 kubelet[662]: E1028 11:28:56.397361 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.606668 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:02 old-k8s-version-674802 kubelet[662]: E1028 11:29:02.396819 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.606849 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:08 old-k8s-version-674802 kubelet[662]: E1028 11:29:08.398191 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.607431 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:14 old-k8s-version-674802 kubelet[662]: E1028 11:29:14.078563 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.607764 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:18 old-k8s-version-674802 kubelet[662]: E1028 11:29:18.494837 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.607959 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:22 old-k8s-version-674802 kubelet[662]: E1028 11:29:22.397401 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.608289 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:29 old-k8s-version-674802 kubelet[662]: E1028 11:29:29.397589 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.608471 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:33 old-k8s-version-674802 kubelet[662]: E1028 11:29:33.397285 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.608798 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:40 old-k8s-version-674802 kubelet[662]: E1028 11:29:40.396913 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.608982 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:45 old-k8s-version-674802 kubelet[662]: E1028 11:29:45.397674 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.609307 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:52 old-k8s-version-674802 kubelet[662]: E1028 11:29:52.396836 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.611779 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:57 old-k8s-version-674802 kubelet[662]: E1028 11:29:57.431254 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:34.612107 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:06 old-k8s-version-674802 kubelet[662]: E1028 11:30:06.396711 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.612291 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:12 old-k8s-version-674802 kubelet[662]: E1028 11:30:12.397420 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.612615 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:19 old-k8s-version-674802 kubelet[662]: E1028 11:30:19.397829 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.612796 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:23 old-k8s-version-674802 kubelet[662]: E1028 11:30:23.400863 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.613385 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:35 old-k8s-version-674802 kubelet[662]: E1028 11:30:35.284773 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.613566 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:37 old-k8s-version-674802 kubelet[662]: E1028 11:30:37.398847 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.613890 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:38 old-k8s-version-674802 kubelet[662]: E1028 11:30:38.494821 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.614074 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:52 old-k8s-version-674802 kubelet[662]: E1028 11:30:52.398247 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.614401 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:53 old-k8s-version-674802 kubelet[662]: E1028 11:30:53.397067 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.614724 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.396824 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.614909 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.398636 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.615234 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397153 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.615415 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397448 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.615751 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:30 old-k8s-version-674802 kubelet[662]: E1028 11:31:30.396768 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.615935 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:32 old-k8s-version-674802 kubelet[662]: E1028 11:31:32.397834 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.616260 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:42 old-k8s-version-674802 kubelet[662]: E1028 11:31:42.396795 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.616442 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:47 old-k8s-version-674802 kubelet[662]: E1028 11:31:47.398090 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.616766 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:56 old-k8s-version-674802 kubelet[662]: E1028 11:31:56.396772 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.616947 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:58 old-k8s-version-674802 kubelet[662]: E1028 11:31:58.397362 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.617274 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: E1028 11:32:07.399844 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.617455 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.617780 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:34.617961 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:34.618293 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
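The 11:32:34 pass above is minikube's log scan flagging every kubelet journal line that matches a known failure pattern. A minimal Go sketch of that kind of scan follows; the marker strings here are assumptions for illustration, not minikube's actual pattern list:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Hypothetical reimplementation of the scan that produces the
// "Found kubelet problem" warnings above: read journalctl output on
// stdin and flag lines that contain known kubelet failure markers.
func main() {
	markers := []string{"Error syncing pod", "Failed to watch"}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // kubelet lines can be very long
	for sc.Scan() {
		line := sc.Text()
		for _, m := range markers {
			if strings.Contains(line, m) {
				fmt.Printf("Found kubelet problem: %s\n", line)
				break
			}
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
	}
}

Fed with the output of `sudo journalctl -u kubelet -n 400`, this prints one "Found kubelet problem" line per match, which is the shape of the W-lines above.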
I1028 11:32:34.618303 1522650 logs.go:123] Gathering logs for dmesg ...
I1028 11:32:34.618317 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1028 11:32:34.637281 1522650 logs.go:123] Gathering logs for kube-apiserver [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6] ...
I1028 11:32:34.637308 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
I1028 11:32:34.701952 1522650 logs.go:123] Gathering logs for kube-apiserver [ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc] ...
I1028 11:32:34.701983 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
I1028 11:32:34.753229 1522650 logs.go:123] Gathering logs for coredns [2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f] ...
I1028 11:32:34.753267 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
I1028 11:32:34.818573 1522650 logs.go:123] Gathering logs for container status ...
I1028 11:32:34.818602 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1028 11:32:34.863856 1522650 logs.go:123] Gathering logs for kube-proxy [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf] ...
I1028 11:32:34.863883 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
I1028 11:32:34.912972 1522650 logs.go:123] Gathering logs for kindnet [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3] ...
I1028 11:32:34.913001 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
I1028 11:32:34.953865 1522650 logs.go:123] Gathering logs for containerd ...
I1028 11:32:34.953893 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1028 11:32:35.019851 1522650 logs.go:123] Gathering logs for storage-provisioner [e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5] ...
I1028 11:32:35.019890 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
I1028 11:32:35.059465 1522650 logs.go:123] Gathering logs for etcd [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148] ...
I1028 11:32:35.059490 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
I1028 11:32:35.105788 1522650 logs.go:123] Gathering logs for kube-scheduler [857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056] ...
I1028 11:32:35.105818 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
I1028 11:32:35.147379 1522650 logs.go:123] Gathering logs for kube-proxy [8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015] ...
I1028 11:32:35.147422 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
I1028 11:32:35.184732 1522650 logs.go:123] Gathering logs for kube-controller-manager [4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7] ...
I1028 11:32:35.184759 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
I1028 11:32:35.257229 1522650 logs.go:123] Gathering logs for storage-provisioner [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8] ...
I1028 11:32:35.257265 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
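Each "Gathering logs for ..." pair above runs one crictl invocation per container ID. A hedged sketch of that step, using the exact command shown in the log (sudo /usr/bin/crictl logs --tail 400 <id>); the helper name and simplified error handling are illustrative:

package main

import (
	"fmt"
	"os/exec"
)

// Hypothetical helper mirroring the log-gathering step above: run
// `crictl logs --tail 400 <id>` for one container ID and return the
// combined stdout/stderr. Binary path and tail length match the log.
func containerLogs(id string) (string, error) {
	cmd := exec.Command("sudo", "/usr/bin/crictl", "logs", "--tail", "400", id)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerLogs("c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6")
	if err != nil {
		fmt.Println("crictl failed:", err)
	}
	fmt.Print(out)
}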
I1028 11:32:35.307930 1522650 out.go:358] Setting ErrFile to fd 2...
I1028 11:32:35.307955 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1028 11:32:35.308039 1522650 out.go:270] X Problems detected in kubelet:
W1028 11:32:35.308052 1522650 out.go:270] Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: E1028 11:32:07.399844 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:35.308074 1522650 out.go:270] Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:35.308084 1522650 out.go:270] Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:35.308089 1522650 out.go:270] Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:35.308094 1522650 out.go:270] Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
I1028 11:32:35.308105 1522650 out.go:358] Setting ErrFile to fd 2...
I1028 11:32:35.308112 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
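Every ErrImagePull and ImagePullBackOff entry in the summary above traces back to a single root cause: the metrics-server pod references the deliberately unresolvable registry host fake.domain, so containerd's pull fails at the DNS step with "no such host". A small Go check that reproduces that resolution step (assuming the same resolver behavior as on the node):

package main

import (
	"fmt"
	"net"
)

// Reproduce the DNS lookup containerd performs before pulling
// fake.domain/registry.k8s.io/echoserver:1.4. The host is intended
// never to resolve, so the error branch is the expected path.
func main() {
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		fmt.Println("lookup failed as expected:", err) // "no such host"
		return
	}
	fmt.Println("unexpectedly resolved:", addrs)
}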
I1028 11:32:45.308563 1522650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1028 11:32:45.323590 1522650 api_server.go:72] duration metric: took 5m53.809706364s to wait for apiserver process to appear ...
I1028 11:32:45.323614 1522650 api_server.go:88] waiting for apiserver healthz status ...
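The healthz wait that starts here polls the apiserver endpoint until it answers. A sketch of such a loop, assuming a hypothetical endpoint URL and skipping TLS verification for the test cluster's self-signed certificate; the real code derives host, port, and certificates from the cluster config:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Poll url until /healthz returns 200 or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}} // test-only
	client := &http.Client{Timeout: 2 * time.Second, Transport: tr}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}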
I1028 11:32:45.323723 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1028 11:32:45.323782 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1028 11:32:45.369849 1522650 cri.go:89] found id: "c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
I1028 11:32:45.369868 1522650 cri.go:89] found id: "ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
I1028 11:32:45.369873 1522650 cri.go:89] found id: ""
I1028 11:32:45.369880 1522650 logs.go:282] 2 containers: [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc]
I1028 11:32:45.369934 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.374584 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.379004 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1028 11:32:45.379074 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1028 11:32:45.433346 1522650 cri.go:89] found id: "6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
I1028 11:32:45.433423 1522650 cri.go:89] found id: "01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
I1028 11:32:45.433442 1522650 cri.go:89] found id: ""
I1028 11:32:45.433462 1522650 logs.go:282] 2 containers: [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232]
I1028 11:32:45.433546 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.438315 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.441971 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1028 11:32:45.442046 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1028 11:32:45.507419 1522650 cri.go:89] found id: "b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
I1028 11:32:45.507441 1522650 cri.go:89] found id: "2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
I1028 11:32:45.507446 1522650 cri.go:89] found id: ""
I1028 11:32:45.507453 1522650 logs.go:282] 2 containers: [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f]
I1028 11:32:45.507510 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.513603 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.517373 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1028 11:32:45.517452 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1028 11:32:45.565346 1522650 cri.go:89] found id: "31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
I1028 11:32:45.565381 1522650 cri.go:89] found id: "857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
I1028 11:32:45.565386 1522650 cri.go:89] found id: ""
I1028 11:32:45.565393 1522650 logs.go:282] 2 containers: [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056]
I1028 11:32:45.565455 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.569124 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.572626 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1028 11:32:45.572699 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1028 11:32:45.624046 1522650 cri.go:89] found id: "c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
I1028 11:32:45.624079 1522650 cri.go:89] found id: "8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
I1028 11:32:45.624084 1522650 cri.go:89] found id: ""
I1028 11:32:45.624091 1522650 logs.go:282] 2 containers: [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015]
I1028 11:32:45.624152 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.627765 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.631106 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1028 11:32:45.631183 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1028 11:32:45.680421 1522650 cri.go:89] found id: "056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
I1028 11:32:45.680444 1522650 cri.go:89] found id: "4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
I1028 11:32:45.680460 1522650 cri.go:89] found id: ""
I1028 11:32:45.680468 1522650 logs.go:282] 2 containers: [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7]
I1028 11:32:45.680531 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.684137 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.687407 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1028 11:32:45.687486 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1028 11:32:45.741649 1522650 cri.go:89] found id: "42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
I1028 11:32:45.741671 1522650 cri.go:89] found id: "120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
I1028 11:32:45.741675 1522650 cri.go:89] found id: ""
I1028 11:32:45.741683 1522650 logs.go:282] 2 containers: [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33]
I1028 11:32:45.741741 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.745863 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.749779 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1028 11:32:45.749843 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1028 11:32:45.801413 1522650 cri.go:89] found id: "9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
I1028 11:32:45.801471 1522650 cri.go:89] found id: ""
I1028 11:32:45.801481 1522650 logs.go:282] 1 containers: [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b]
I1028 11:32:45.801539 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.805656 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1028 11:32:45.805718 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1028 11:32:45.904645 1522650 cri.go:89] found id: "af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
I1028 11:32:45.904670 1522650 cri.go:89] found id: "e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
I1028 11:32:45.904675 1522650 cri.go:89] found id: ""
I1028 11:32:45.904682 1522650 logs.go:282] 2 containers: [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5]
I1028 11:32:45.904738 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.910966 1522650 ssh_runner.go:195] Run: which crictl
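The listing pattern repeated above ("listing CRI containers ... found id ... N containers") comes from running crictl ps -a --quiet --name=<component> and splitting the output into IDs. A hedged Go equivalent; the function name and error handling are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// `crictl ps -a --quiet --name=<name>` prints one container ID per
// line; collect the non-empty lines, which yields the
// "2 containers: [...]" entries seen in the log.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, l := range strings.Split(string(out), "\n") {
		if l = strings.TrimSpace(l); l != "" {
			ids = append(ids, l)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("etcd")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}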
I1028 11:32:45.917821 1522650 logs.go:123] Gathering logs for coredns [2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f] ...
I1028 11:32:45.917843 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
I1028 11:32:45.972711 1522650 logs.go:123] Gathering logs for kindnet [120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33] ...
I1028 11:32:45.972737 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
I1028 11:32:46.067160 1522650 logs.go:123] Gathering logs for kubelet ...
I1028 11:32:46.067189 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1028 11:32:46.128234 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.736433 662 reflector.go:138] object-"default"/"default-token-rkh5t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rkh5t" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.128574 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.767488 662 reflector.go:138] object-"kube-system"/"metrics-server-token-bnmqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-bnmqq" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.128790 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.769521 662 reflector.go:138] object-"kube-system"/"kindnet-token-rljtj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rljtj" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.129006 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.783592 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-v6b5p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-v6b5p" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.129208 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786532 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.129433 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786656 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-7s7sg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-7s7sg" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.129655 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786717 662 reflector.go:138] object-"kube-system"/"coredns-token-t6lq7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-t6lq7" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.129858 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786765 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.140618 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:13 old-k8s-version-674802 kubelet[662]: E1028 11:27:13.704595 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:46.143017 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:14 old-k8s-version-674802 kubelet[662]: E1028 11:27:14.680234 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.145830 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:27 old-k8s-version-674802 kubelet[662]: E1028 11:27:27.405191 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:46.148040 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:36 old-k8s-version-674802 kubelet[662]: E1028 11:27:36.774117 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.148376 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:37 old-k8s-version-674802 kubelet[662]: E1028 11:27:37.779544 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.148704 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:38 old-k8s-version-674802 kubelet[662]: E1028 11:27:38.781616 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.148892 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:39 old-k8s-version-674802 kubelet[662]: E1028 11:27:39.404833 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.149721 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:45 old-k8s-version-674802 kubelet[662]: E1028 11:27:45.806023 662 pod_workers.go:191] Error syncing pod eb6e0fb4-e030-4eb7-8b96-477de7691df6 ("storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"
W1028 11:32:46.152611 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:51 old-k8s-version-674802 kubelet[662]: E1028 11:27:51.412111 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:46.153207 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:52 old-k8s-version-674802 kubelet[662]: E1028 11:27:52.828529 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.153668 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:58 old-k8s-version-674802 kubelet[662]: E1028 11:27:58.495326 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.153849 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:06 old-k8s-version-674802 kubelet[662]: E1028 11:28:06.398915 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.154173 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:11 old-k8s-version-674802 kubelet[662]: E1028 11:28:11.397340 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.154353 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:18 old-k8s-version-674802 kubelet[662]: E1028 11:28:18.397191 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.154935 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:25 old-k8s-version-674802 kubelet[662]: E1028 11:28:25.930176 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.155258 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:28 old-k8s-version-674802 kubelet[662]: E1028 11:28:28.494838 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.157743 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:32 old-k8s-version-674802 kubelet[662]: E1028 11:28:32.410167 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:46.158074 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:39 old-k8s-version-674802 kubelet[662]: E1028 11:28:39.398920 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.158257 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:43 old-k8s-version-674802 kubelet[662]: E1028 11:28:43.399781 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.158584 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:50 old-k8s-version-674802 kubelet[662]: E1028 11:28:50.396875 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.158767 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:56 old-k8s-version-674802 kubelet[662]: E1028 11:28:56.397361 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.159090 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:02 old-k8s-version-674802 kubelet[662]: E1028 11:29:02.396819 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.159293 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:08 old-k8s-version-674802 kubelet[662]: E1028 11:29:08.398191 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.159946 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:14 old-k8s-version-674802 kubelet[662]: E1028 11:29:14.078563 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.160290 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:18 old-k8s-version-674802 kubelet[662]: E1028 11:29:18.494837 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.160481 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:22 old-k8s-version-674802 kubelet[662]: E1028 11:29:22.397401 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.160804 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:29 old-k8s-version-674802 kubelet[662]: E1028 11:29:29.397589 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.160987 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:33 old-k8s-version-674802 kubelet[662]: E1028 11:29:33.397285 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.161311 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:40 old-k8s-version-674802 kubelet[662]: E1028 11:29:40.396913 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.161495 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:45 old-k8s-version-674802 kubelet[662]: E1028 11:29:45.397674 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.161821 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:52 old-k8s-version-674802 kubelet[662]: E1028 11:29:52.396836 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.164259 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:57 old-k8s-version-674802 kubelet[662]: E1028 11:29:57.431254 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:46.164587 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:06 old-k8s-version-674802 kubelet[662]: E1028 11:30:06.396711 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.164771 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:12 old-k8s-version-674802 kubelet[662]: E1028 11:30:12.397420 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.165094 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:19 old-k8s-version-674802 kubelet[662]: E1028 11:30:19.397829 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.165276 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:23 old-k8s-version-674802 kubelet[662]: E1028 11:30:23.400863 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.165874 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:35 old-k8s-version-674802 kubelet[662]: E1028 11:30:35.284773 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.166056 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:37 old-k8s-version-674802 kubelet[662]: E1028 11:30:37.398847 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.166378 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:38 old-k8s-version-674802 kubelet[662]: E1028 11:30:38.494821 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.166560 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:52 old-k8s-version-674802 kubelet[662]: E1028 11:30:52.398247 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.166884 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:53 old-k8s-version-674802 kubelet[662]: E1028 11:30:53.397067 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.167207 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.396824 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.167388 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.398636 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.167725 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397153 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.167908 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397448 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.168236 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:30 old-k8s-version-674802 kubelet[662]: E1028 11:31:30.396768 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.168418 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:32 old-k8s-version-674802 kubelet[662]: E1028 11:31:32.397834 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.168745 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:42 old-k8s-version-674802 kubelet[662]: E1028 11:31:42.396795 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.168925 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:47 old-k8s-version-674802 kubelet[662]: E1028 11:31:47.398090 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.169249 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:56 old-k8s-version-674802 kubelet[662]: E1028 11:31:56.396772 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.169434 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:58 old-k8s-version-674802 kubelet[662]: E1028 11:31:58.397362 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.169758 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: E1028 11:32:07.399844 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.169942 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.170339 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.170526 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.170854 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.173305 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419884 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
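The ErrImagePull above is expected rather than a regression: this test profile enables the metrics-server addon with its registry overridden to the deliberately unresolvable host fake.domain (see the `addons enable metrics-server ... --registries=MetricsServer=fake.domain` row in the Audit table below). A minimal sketch to reproduce the same DNS failure by hand from inside the node container, assuming the profile name used in this run:

    docker exec old-k8s-version-674802 sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
    # fails with: dial tcp: lookup fake.domain ... no such host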
I1028 11:32:46.173316 1522650 logs.go:123] Gathering logs for etcd [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148] ...
I1028 11:32:46.173330 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
I1028 11:32:46.223969 1522650 logs.go:123] Gathering logs for kube-proxy [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf] ...
I1028 11:32:46.224006 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
I1028 11:32:46.277259 1522650 logs.go:123] Gathering logs for kubernetes-dashboard [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b] ...
I1028 11:32:46.277289 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
I1028 11:32:46.337485 1522650 logs.go:123] Gathering logs for storage-provisioner [e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5] ...
I1028 11:32:46.337520 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
I1028 11:32:46.395372 1522650 logs.go:123] Gathering logs for container status ...
I1028 11:32:46.395422 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1028 11:32:46.450094 1522650 logs.go:123] Gathering logs for describe nodes ...
I1028 11:32:46.450127 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1028 11:32:46.647254 1522650 logs.go:123] Gathering logs for kube-scheduler [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824] ...
I1028 11:32:46.647867 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
I1028 11:32:46.698393 1522650 logs.go:123] Gathering logs for kube-controller-manager [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438] ...
I1028 11:32:46.698421 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
I1028 11:32:46.756978 1522650 logs.go:123] Gathering logs for kube-controller-manager [4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7] ...
I1028 11:32:46.757015 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
I1028 11:32:46.831122 1522650 logs.go:123] Gathering logs for kindnet [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3] ...
I1028 11:32:46.831163 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
I1028 11:32:46.880307 1522650 logs.go:123] Gathering logs for etcd [01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232] ...
I1028 11:32:46.880340 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
I1028 11:32:46.936132 1522650 logs.go:123] Gathering logs for kube-scheduler [857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056] ...
I1028 11:32:46.936165 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
I1028 11:32:46.982104 1522650 logs.go:123] Gathering logs for kube-apiserver [ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc] ...
I1028 11:32:46.982133 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
I1028 11:32:47.048875 1522650 logs.go:123] Gathering logs for coredns [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59] ...
I1028 11:32:47.048911 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
I1028 11:32:47.093129 1522650 logs.go:123] Gathering logs for kube-proxy [8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015] ...
I1028 11:32:47.093157 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
I1028 11:32:47.132824 1522650 logs.go:123] Gathering logs for storage-provisioner [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8] ...
I1028 11:32:47.132849 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
I1028 11:32:47.172011 1522650 logs.go:123] Gathering logs for containerd ...
I1028 11:32:47.172037 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1028 11:32:47.239434 1522650 logs.go:123] Gathering logs for dmesg ...
I1028 11:32:47.239469 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1028 11:32:47.257467 1522650 logs.go:123] Gathering logs for kube-apiserver [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6] ...
I1028 11:32:47.257498 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
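Each "Gathering logs for ..." step above follows the same two-step pattern: resolve candidate container IDs with crictl, then tail each container's log. A minimal sketch of the equivalent manual invocation inside the node, using kube-apiserver as the target (the <id> placeholder is taken from the first command's output):

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo /usr/bin/crictl logs --tail 400 <id>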
I1028 11:32:47.317252 1522650 out.go:358] Setting ErrFile to fd 2...
I1028 11:32:47.317286 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1028 11:32:47.317350 1522650 out.go:270] X Problems detected in kubelet:
W1028 11:32:47.317369 1522650 out.go:270] Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:47.317384 1522650 out.go:270] Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:47.317397 1522650 out.go:270] Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:47.317405 1522650 out.go:270] Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:47.317419 1522650 out.go:270] Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419884 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
I1028 11:32:47.317439 1522650 out.go:358] Setting ErrFile to fd 2...
I1028 11:32:47.317446 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:32:57.318433 1522650 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1028 11:32:57.330588 1522650 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
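The healthz probe above can be reproduced by hand against the same endpoint; a minimal sketch, assuming the node IP from this run and skipping TLS verification since the API server certificate is signed by the cluster's own CA:

    curl -k https://192.168.76.2:8443/healthz
    # a healthy API server responds with: ok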
I1028 11:32:57.332171 1522650 out.go:201]
W1028 11:32:57.333624 1522650 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1028 11:32:57.333844 1522650 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1028 11:32:57.333985 1522650 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1028 11:32:57.334048 1522650 out.go:270] *
W1028 11:32:57.335332 1522650 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
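A sketch of the recovery the Suggestion line proposes, assuming the same binary and profile as the failing run (remaining start flags as in the original invocation):

    out/minikube-linux-arm64 delete --all --purge
    out/minikube-linux-arm64 start -p old-k8s-version-674802 --memory=2200 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0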
I1028 11:32:57.337475 1522650 out.go:201]
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-674802 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-674802
helpers_test.go:235: (dbg) docker inspect old-k8s-version-674802:
-- stdout --
[
{
"Id": "dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7",
"Created": "2024-10-28T11:24:04.195440534Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 1522844,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-10-28T11:26:43.172293132Z",
"FinishedAt": "2024-10-28T11:26:42.190662828Z"
},
"Image": "sha256:e536a13478ac3e12b0286f2242f0931e32c32cc3eeb0139a219c9133dcd9fe90",
"ResolvConfPath": "/var/lib/docker/containers/dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7/hostname",
"HostsPath": "/var/lib/docker/containers/dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7/hosts",
"LogPath": "/var/lib/docker/containers/dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7/dc3c31f51b66fcbdecd16e7e2130cc4bcf0676abcdfd75db06584109bc354ba7-json.log",
"Name": "/old-k8s-version-674802",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-674802:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-674802",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/959053794ac2dcd3b48881ed4ff6293f07dc6e98a0aafbb16fb787c58523d221-init/diff:/var/lib/docker/overlay2/3a4c28ee2a9f0b48a71bf9958e5e93be9c21155427c18565406f15d470c50d00/diff",
"MergedDir": "/var/lib/docker/overlay2/959053794ac2dcd3b48881ed4ff6293f07dc6e98a0aafbb16fb787c58523d221/merged",
"UpperDir": "/var/lib/docker/overlay2/959053794ac2dcd3b48881ed4ff6293f07dc6e98a0aafbb16fb787c58523d221/diff",
"WorkDir": "/var/lib/docker/overlay2/959053794ac2dcd3b48881ed4ff6293f07dc6e98a0aafbb16fb787c58523d221/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-674802",
"Source": "/var/lib/docker/volumes/old-k8s-version-674802/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-674802",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-674802",
"name.minikube.sigs.k8s.io": "old-k8s-version-674802",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "fa682f8af7cd4c13c7751c7e04013881bb7477879d9a41b587770e995db3595c",
"SandboxKey": "/var/run/docker/netns/fa682f8af7cd",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "40375"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "40376"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "40379"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "40377"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "40378"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-674802": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "5781a9c4ca19259a018dae251240a67b66da80fe7be0072f1f7a04b54b46de4f",
"EndpointID": "ac6cd79e2fd3ea9a7822c749fdb308246197d3aff07dd9063e20f68e99ba28aa",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-674802",
"dc3c31f51b66"
]
}
}
}
}
]
-- /stdout --
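The port mappings recorded under NetworkSettings.Ports in the inspect output above can be read back directly with a Go template instead of scanning the full JSON; a minimal sketch for the API server port (8443/tcp), assuming the same container name:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-674802
    # prints: 40378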
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-674802 -n old-k8s-version-674802
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-674802 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-674802 logs -n 25: (2.501957355s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-219316 | cert-expiration-219316 | jenkins | v1.34.0 | 28 Oct 24 11:22 UTC | 28 Oct 24 11:23 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-229837 | force-systemd-env-229837 | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-229837 | force-systemd-env-229837 | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
| start | -p cert-options-136781 | cert-options-136781 | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-136781 ssh | cert-options-136781 | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-136781 -- sudo | cert-options-136781 | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-136781 | cert-options-136781 | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:23 UTC |
| start | -p old-k8s-version-674802 | old-k8s-version-674802 | jenkins | v1.34.0 | 28 Oct 24 11:23 UTC | 28 Oct 24 11:26 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-219316 | cert-expiration-219316 | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:26 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| addons | enable metrics-server -p old-k8s-version-674802 | old-k8s-version-674802 | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:26 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| delete | -p cert-expiration-219316 | cert-expiration-219316 | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:26 UTC |
| stop | -p old-k8s-version-674802 | old-k8s-version-674802 | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:26 UTC |
| | --alsologtostderr -v=3 | | | | | |
| start | -p no-preload-196138 | no-preload-196138 | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:27 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| addons | enable dashboard -p old-k8s-version-674802 | old-k8s-version-674802 | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | 28 Oct 24 11:26 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-674802 | old-k8s-version-674802 | jenkins | v1.34.0 | 28 Oct 24 11:26 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-196138 | no-preload-196138 | jenkins | v1.34.0 | 28 Oct 24 11:27 UTC | 28 Oct 24 11:27 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-196138 | no-preload-196138 | jenkins | v1.34.0 | 28 Oct 24 11:27 UTC | 28 Oct 24 11:27 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-196138 | no-preload-196138 | jenkins | v1.34.0 | 28 Oct 24 11:27 UTC | 28 Oct 24 11:27 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-196138 | no-preload-196138 | jenkins | v1.34.0 | 28 Oct 24 11:27 UTC | 28 Oct 24 11:32 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr | | | | | |
| | --wait=true --preload=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
| image | no-preload-196138 image list | no-preload-196138 | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC | 28 Oct 24 11:32 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-196138 | no-preload-196138 | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC | 28 Oct 24 11:32 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-196138 | no-preload-196138 | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC | 28 Oct 24 11:32 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-196138 | no-preload-196138 | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC | 28 Oct 24 11:32 UTC |
| delete | -p no-preload-196138 | no-preload-196138 | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC | 28 Oct 24 11:32 UTC |
| start | -p embed-certs-542883 | embed-certs-542883 | jenkins | v1.34.0 | 28 Oct 24 11:32 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.2 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
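The failing SecondStart corresponds to the `start -p old-k8s-version-674802` row above with an empty End Time. Assuming a minikube build recent enough to support the flag, the same audit table can also be printed on demand:

    out/minikube-linux-arm64 logs --audit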
==> Last Start <==
Log file created at: 2024/10/28 11:32:42
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.23.2 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1028 11:32:42.376785 1533911 out.go:345] Setting OutFile to fd 1 ...
I1028 11:32:42.376998 1533911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:32:42.377032 1533911 out.go:358] Setting ErrFile to fd 2...
I1028 11:32:42.377051 1533911 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:32:42.377437 1533911 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19876-1313708/.minikube/bin
I1028 11:32:42.378482 1533911 out.go:352] Setting JSON to false
I1028 11:32:42.379728 1533911 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":148493,"bootTime":1729966670,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I1028 11:32:42.379819 1533911 start.go:139] virtualization:
I1028 11:32:42.382343 1533911 out.go:177] * [embed-certs-542883] minikube v1.34.0 on Ubuntu 20.04 (arm64)
I1028 11:32:42.384065 1533911 out.go:177] - MINIKUBE_LOCATION=19876
I1028 11:32:42.384150 1533911 notify.go:220] Checking for updates...
I1028 11:32:42.387557 1533911 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1028 11:32:42.389706 1533911 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19876-1313708/kubeconfig
I1028 11:32:42.391924 1533911 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19876-1313708/.minikube
I1028 11:32:42.393791 1533911 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I1028 11:32:42.396204 1533911 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I1028 11:32:42.399990 1533911 config.go:182] Loaded profile config "old-k8s-version-674802": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I1028 11:32:42.400108 1533911 driver.go:394] Setting default libvirt URI to qemu:///system
I1028 11:32:42.421793 1533911 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
I1028 11:32:42.421915 1533911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1028 11:32:42.474704 1533911 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-28 11:32:42.464687361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1028 11:32:42.474818 1533911 docker.go:318] overlay module found
I1028 11:32:42.477367 1533911 out.go:177] * Using the docker driver based on user configuration
I1028 11:32:42.479597 1533911 start.go:297] selected driver: docker
I1028 11:32:42.479613 1533911 start.go:901] validating driver "docker" against <nil>
I1028 11:32:42.479731 1533911 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1028 11:32:42.480449 1533911 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1028 11:32:42.542358 1533911 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-28 11:32:42.533551454 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
I1028 11:32:42.542574 1533911 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I1028 11:32:42.542803 1533911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1028 11:32:42.545418 1533911 out.go:177] * Using Docker driver with root privileges
I1028 11:32:42.547774 1533911 cni.go:84] Creating CNI manager for ""
I1028 11:32:42.547842 1533911 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1028 11:32:42.547856 1533911 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I1028 11:32:42.547936 1533911 start.go:340] cluster config:
{Name:embed-certs-542883 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-542883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 11:32:42.550617 1533911 out.go:177] * Starting "embed-certs-542883" primary control-plane node in "embed-certs-542883" cluster
I1028 11:32:42.552984 1533911 cache.go:121] Beginning downloading kic base image for docker with containerd
I1028 11:32:42.555727 1533911 out.go:177] * Pulling base image v0.0.45-1729876044-19868 ...
I1028 11:32:42.558029 1533911 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1028 11:32:42.558080 1533911 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4
I1028 11:32:42.558099 1533911 cache.go:56] Caching tarball of preloaded images
I1028 11:32:42.558119 1533911 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon
I1028 11:32:42.558193 1533911 preload.go:172] Found /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I1028 11:32:42.558204 1533911 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on containerd
I1028 11:32:42.558310 1533911 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/config.json ...
I1028 11:32:42.558327 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/config.json: {Name:mk163284fb8b825a2d09aa810291bae333e1b90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 11:32:42.576536 1533911 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e in local docker daemon, skipping pull
I1028 11:32:42.576561 1533911 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e exists in daemon, skipping load
I1028 11:32:42.576581 1533911 cache.go:194] Successfully downloaded all kic artifacts
I1028 11:32:42.576605 1533911 start.go:360] acquireMachinesLock for embed-certs-542883: {Name:mk38179026b4a8b0728f92075de25e9a2bfe102c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 11:32:42.577170 1533911 start.go:364] duration metric: took 536.672µs to acquireMachinesLock for "embed-certs-542883"
I1028 11:32:42.577209 1533911 start.go:93] Provisioning new machine with config: &{Name:embed-certs-542883 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-542883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I1028 11:32:42.577289 1533911 start.go:125] createHost starting for "" (driver="docker")
I1028 11:32:42.581759 1533911 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
I1028 11:32:42.582155 1533911 start.go:159] libmachine.API.Create for "embed-certs-542883" (driver="docker")
I1028 11:32:42.582209 1533911 client.go:168] LocalClient.Create starting
I1028 11:32:42.582346 1533911 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem
I1028 11:32:42.582388 1533911 main.go:141] libmachine: Decoding PEM data...
I1028 11:32:42.582402 1533911 main.go:141] libmachine: Parsing certificate...
I1028 11:32:42.582500 1533911 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem
I1028 11:32:42.582557 1533911 main.go:141] libmachine: Decoding PEM data...
I1028 11:32:42.582571 1533911 main.go:141] libmachine: Parsing certificate...
I1028 11:32:42.583018 1533911 cli_runner.go:164] Run: docker network inspect embed-certs-542883 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1028 11:32:42.608805 1533911 cli_runner.go:211] docker network inspect embed-certs-542883 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1028 11:32:42.608899 1533911 network_create.go:284] running [docker network inspect embed-certs-542883] to gather additional debugging logs...
I1028 11:32:42.608916 1533911 cli_runner.go:164] Run: docker network inspect embed-certs-542883
W1028 11:32:42.623476 1533911 cli_runner.go:211] docker network inspect embed-certs-542883 returned with exit code 1
I1028 11:32:42.623505 1533911 network_create.go:287] error running [docker network inspect embed-certs-542883]: docker network inspect embed-certs-542883: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-542883 not found
I1028 11:32:42.623519 1533911 network_create.go:289] output of [docker network inspect embed-certs-542883]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-542883 not found
** /stderr **
I1028 11:32:42.623615 1533911 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1028 11:32:42.640450 1533911 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e8a2656e00eb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:39:ff:27:31} reservation:<nil>}
I1028 11:32:42.640917 1533911 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e05de1d17c9e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:c3:02:96} reservation:<nil>}
I1028 11:32:42.641340 1533911 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-1756b1c23cfa IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:4f:9c:02:cb} reservation:<nil>}
I1028 11:32:42.641708 1533911 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5781a9c4ca19 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:50:3d:78:b9} reservation:<nil>}
I1028 11:32:42.642254 1533911 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a184c0}
I1028 11:32:42.642301 1533911 network_create.go:124] attempt to create docker network embed-certs-542883 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I1028 11:32:42.642376 1533911 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-542883 embed-certs-542883
I1028 11:32:42.720532 1533911 network_create.go:108] docker network embed-certs-542883 192.168.85.0/24 created
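The subnet scan above is minikube's free-CIDR search: it walks candidate private /24s in order (here 192.168.49.0/24 through 192.168.76.0/24 are already claimed by existing bridges) and takes the first free one, 192.168.85.0/24. A minimal Go sketch of that first-free scan; the candidate step and the taken set are inferred from this log, not copied from minikube's source:

    package main

    import "fmt"

    func main() {
        // Candidate /24s stepped by 9 in the third octet, matching the
        // sequence in the log (49, 58, 67, 76, 85, ...). The taken set
        // mirrors the four bridges the scan skipped.
        taken := map[string]bool{
            "192.168.49.0/24": true,
            "192.168.58.0/24": true,
            "192.168.67.0/24": true,
            "192.168.76.0/24": true,
        }
        for octet := 49; octet < 256; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if taken[subnet] {
                fmt.Println("skipping subnet", subnet, "that is taken")
                continue
            }
            fmt.Println("using free private subnet", subnet)
            break
        }
    }

The chosen subnet's gateway is .1 and the node is pinned to the first client address, .2, which is why the next line calculates static IP 192.168.85.2.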
I1028 11:32:42.720566 1533911 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-542883" container
I1028 11:32:42.720650 1533911 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I1028 11:32:42.735744 1533911 cli_runner.go:164] Run: docker volume create embed-certs-542883 --label name.minikube.sigs.k8s.io=embed-certs-542883 --label created_by.minikube.sigs.k8s.io=true
I1028 11:32:42.754424 1533911 oci.go:103] Successfully created a docker volume embed-certs-542883
I1028 11:32:42.754524 1533911 cli_runner.go:164] Run: docker run --rm --name embed-certs-542883-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-542883 --entrypoint /usr/bin/test -v embed-certs-542883:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -d /var/lib
I1028 11:32:43.427737 1533911 oci.go:107] Successfully prepared a docker volume embed-certs-542883
I1028 11:32:43.427783 1533911 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1028 11:32:43.427803 1533911 kic.go:194] Starting extracting preloaded images to volume ...
I1028 11:32:43.427877 1533911 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-542883:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir
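The preload step hands extraction to a disposable container: the lz4 image tarball is bind-mounted read-only, the named volume is mounted at /extractDir, and tar runs as the entrypoint. A hedged Go sketch of shelling out the same way the cli_runner Run lines record; the tarball path is shortened to a placeholder and error handling is reduced to the essentials:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runDocker shells out to the docker CLI and surfaces combined
    // output plus the exit status, roughly what cli_runner.go logs.
    func runDocker(args ...string) error {
        out, err := exec.Command("docker", args...).CombinedOutput()
        fmt.Printf("Run: docker %v\n%s", args, out)
        return err
    }

    func main() {
        // A throwaway container whose entrypoint is tar unpacks the lz4
        // preload into the named volume, then is removed (--rm).
        _ = runDocker("run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
            "-v", "embed-certs-542883:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    }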
I1028 11:32:45.308563 1522650 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1028 11:32:45.323590 1522650 api_server.go:72] duration metric: took 5m53.809706364s to wait for apiserver process to appear ...
I1028 11:32:45.323614 1522650 api_server.go:88] waiting for apiserver healthz status ...
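The healthz wait announced here is a poll loop against the apiserver's /healthz endpoint. A minimal Go sketch under the assumption of a plain HTTPS probe; a real client would trust the cluster CA (or skip TLS verification), and the URL and intervals below are illustrative only:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls /healthz until it answers 200 or the deadline
    // passes; on self-signed clusters http.Get would need TLS config.
    func waitForHealthz(url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := http.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("apiserver healthz not ok within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.76.2:8443/healthz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }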
I1028 11:32:45.323723 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I1028 11:32:45.323782 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1028 11:32:45.369849 1522650 cri.go:89] found id: "c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
I1028 11:32:45.369868 1522650 cri.go:89] found id: "ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
I1028 11:32:45.369873 1522650 cri.go:89] found id: ""
I1028 11:32:45.369880 1522650 logs.go:282] 2 containers: [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc]
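Each "found id" / "N containers" pair above comes from running crictl ps -a --quiet --name=<component>, which prints one container ID per line; the blank final "found id" entry is the trailing newline. A small illustrative parser for that output:

    package main

    import (
        "fmt"
        "strings"
    )

    // parseIDs splits the --quiet output (one container ID per line),
    // dropping the empty trailing entry the log shows before trimming.
    func parseIDs(raw string) []string {
        var ids []string
        for _, line := range strings.Split(raw, "\n") {
            if id := strings.TrimSpace(line); id != "" {
                ids = append(ids, id)
            }
        }
        return ids
    }

    func main() {
        raw := "c02d779e69c4...\nba54ab63823c...\n"
        ids := parseIDs(raw)
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }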
I1028 11:32:45.369934 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.374584 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.379004 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I1028 11:32:45.379074 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I1028 11:32:45.433346 1522650 cri.go:89] found id: "6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
I1028 11:32:45.433423 1522650 cri.go:89] found id: "01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
I1028 11:32:45.433442 1522650 cri.go:89] found id: ""
I1028 11:32:45.433462 1522650 logs.go:282] 2 containers: [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232]
I1028 11:32:45.433546 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.438315 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.441971 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I1028 11:32:45.442046 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I1028 11:32:45.507419 1522650 cri.go:89] found id: "b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
I1028 11:32:45.507441 1522650 cri.go:89] found id: "2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
I1028 11:32:45.507446 1522650 cri.go:89] found id: ""
I1028 11:32:45.507453 1522650 logs.go:282] 2 containers: [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f]
I1028 11:32:45.507510 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.513603 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.517373 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I1028 11:32:45.517452 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1028 11:32:45.565346 1522650 cri.go:89] found id: "31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
I1028 11:32:45.565381 1522650 cri.go:89] found id: "857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
I1028 11:32:45.565386 1522650 cri.go:89] found id: ""
I1028 11:32:45.565393 1522650 logs.go:282] 2 containers: [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056]
I1028 11:32:45.565455 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.569124 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.572626 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I1028 11:32:45.572699 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1028 11:32:45.624046 1522650 cri.go:89] found id: "c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
I1028 11:32:45.624079 1522650 cri.go:89] found id: "8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
I1028 11:32:45.624084 1522650 cri.go:89] found id: ""
I1028 11:32:45.624091 1522650 logs.go:282] 2 containers: [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015]
I1028 11:32:45.624152 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.627765 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.631106 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I1028 11:32:45.631183 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1028 11:32:45.680421 1522650 cri.go:89] found id: "056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
I1028 11:32:45.680444 1522650 cri.go:89] found id: "4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
I1028 11:32:45.680460 1522650 cri.go:89] found id: ""
I1028 11:32:45.680468 1522650 logs.go:282] 2 containers: [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7]
I1028 11:32:45.680531 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.684137 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.687407 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I1028 11:32:45.687486 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I1028 11:32:45.741649 1522650 cri.go:89] found id: "42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
I1028 11:32:45.741671 1522650 cri.go:89] found id: "120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
I1028 11:32:45.741675 1522650 cri.go:89] found id: ""
I1028 11:32:45.741683 1522650 logs.go:282] 2 containers: [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33]
I1028 11:32:45.741741 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.745863 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.749779 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I1028 11:32:45.749843 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1028 11:32:45.801413 1522650 cri.go:89] found id: "9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
I1028 11:32:45.801471 1522650 cri.go:89] found id: ""
I1028 11:32:45.801481 1522650 logs.go:282] 1 container: [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b]
I1028 11:32:45.801539 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.805656 1522650 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I1028 11:32:45.805718 1522650 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1028 11:32:45.904645 1522650 cri.go:89] found id: "af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
I1028 11:32:45.904670 1522650 cri.go:89] found id: "e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
I1028 11:32:45.904675 1522650 cri.go:89] found id: ""
I1028 11:32:45.904682 1522650 logs.go:282] 2 containers: [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5]
I1028 11:32:45.904738 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.910966 1522650 ssh_runner.go:195] Run: which crictl
I1028 11:32:45.917821 1522650 logs.go:123] Gathering logs for coredns [2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f] ...
I1028 11:32:45.917843 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f"
I1028 11:32:45.972711 1522650 logs.go:123] Gathering logs for kindnet [120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33] ...
I1028 11:32:45.972737 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33"
I1028 11:32:46.067160 1522650 logs.go:123] Gathering logs for kubelet ...
I1028 11:32:46.067189 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W1028 11:32:46.128234 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.736433 662 reflector.go:138] object-"default"/"default-token-rkh5t": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-rkh5t" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.128574 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.767488 662 reflector.go:138] object-"kube-system"/"metrics-server-token-bnmqq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-bnmqq" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.128790 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.769521 662 reflector.go:138] object-"kube-system"/"kindnet-token-rljtj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-rljtj" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.129006 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.783592 662 reflector.go:138] object-"kube-system"/"kube-proxy-token-v6b5p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-v6b5p" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.129208 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786532 662 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.129433 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786656 662 reflector.go:138] object-"kube-system"/"storage-provisioner-token-7s7sg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-7s7sg" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.129655 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786717 662 reflector.go:138] object-"kube-system"/"coredns-token-t6lq7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-t6lq7" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.129858 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:11 old-k8s-version-674802 kubelet[662]: E1028 11:27:11.786765 662 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-674802" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-674802' and this object
W1028 11:32:46.140618 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:13 old-k8s-version-674802 kubelet[662]: E1028 11:27:13.704595 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:46.143017 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:14 old-k8s-version-674802 kubelet[662]: E1028 11:27:14.680234 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.145830 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:27 old-k8s-version-674802 kubelet[662]: E1028 11:27:27.405191 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:46.148040 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:36 old-k8s-version-674802 kubelet[662]: E1028 11:27:36.774117 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.148376 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:37 old-k8s-version-674802 kubelet[662]: E1028 11:27:37.779544 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.148704 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:38 old-k8s-version-674802 kubelet[662]: E1028 11:27:38.781616 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.148892 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:39 old-k8s-version-674802 kubelet[662]: E1028 11:27:39.404833 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.149721 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:45 old-k8s-version-674802 kubelet[662]: E1028 11:27:45.806023 662 pod_workers.go:191] Error syncing pod eb6e0fb4-e030-4eb7-8b96-477de7691df6 ("storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(eb6e0fb4-e030-4eb7-8b96-477de7691df6)"
W1028 11:32:46.152611 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:51 old-k8s-version-674802 kubelet[662]: E1028 11:27:51.412111 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:46.153207 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:52 old-k8s-version-674802 kubelet[662]: E1028 11:27:52.828529 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.153668 1522650 logs.go:138] Found kubelet problem: Oct 28 11:27:58 old-k8s-version-674802 kubelet[662]: E1028 11:27:58.495326 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.153849 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:06 old-k8s-version-674802 kubelet[662]: E1028 11:28:06.398915 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.154173 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:11 old-k8s-version-674802 kubelet[662]: E1028 11:28:11.397340 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.154353 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:18 old-k8s-version-674802 kubelet[662]: E1028 11:28:18.397191 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.154935 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:25 old-k8s-version-674802 kubelet[662]: E1028 11:28:25.930176 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.155258 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:28 old-k8s-version-674802 kubelet[662]: E1028 11:28:28.494838 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.157743 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:32 old-k8s-version-674802 kubelet[662]: E1028 11:28:32.410167 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:46.158074 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:39 old-k8s-version-674802 kubelet[662]: E1028 11:28:39.398920 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.158257 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:43 old-k8s-version-674802 kubelet[662]: E1028 11:28:43.399781 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.158584 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:50 old-k8s-version-674802 kubelet[662]: E1028 11:28:50.396875 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.158767 1522650 logs.go:138] Found kubelet problem: Oct 28 11:28:56 old-k8s-version-674802 kubelet[662]: E1028 11:28:56.397361 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.159090 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:02 old-k8s-version-674802 kubelet[662]: E1028 11:29:02.396819 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.159293 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:08 old-k8s-version-674802 kubelet[662]: E1028 11:29:08.398191 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.159946 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:14 old-k8s-version-674802 kubelet[662]: E1028 11:29:14.078563 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.160290 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:18 old-k8s-version-674802 kubelet[662]: E1028 11:29:18.494837 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.160481 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:22 old-k8s-version-674802 kubelet[662]: E1028 11:29:22.397401 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.160804 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:29 old-k8s-version-674802 kubelet[662]: E1028 11:29:29.397589 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.160987 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:33 old-k8s-version-674802 kubelet[662]: E1028 11:29:33.397285 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.161311 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:40 old-k8s-version-674802 kubelet[662]: E1028 11:29:40.396913 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.161495 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:45 old-k8s-version-674802 kubelet[662]: E1028 11:29:45.397674 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.161821 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:52 old-k8s-version-674802 kubelet[662]: E1028 11:29:52.396836 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.164259 1522650 logs.go:138] Found kubelet problem: Oct 28 11:29:57 old-k8s-version-674802 kubelet[662]: E1028 11:29:57.431254 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W1028 11:32:46.164587 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:06 old-k8s-version-674802 kubelet[662]: E1028 11:30:06.396711 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.164771 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:12 old-k8s-version-674802 kubelet[662]: E1028 11:30:12.397420 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.165094 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:19 old-k8s-version-674802 kubelet[662]: E1028 11:30:19.397829 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.165276 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:23 old-k8s-version-674802 kubelet[662]: E1028 11:30:23.400863 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.165874 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:35 old-k8s-version-674802 kubelet[662]: E1028 11:30:35.284773 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.166056 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:37 old-k8s-version-674802 kubelet[662]: E1028 11:30:37.398847 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.166378 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:38 old-k8s-version-674802 kubelet[662]: E1028 11:30:38.494821 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.166560 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:52 old-k8s-version-674802 kubelet[662]: E1028 11:30:52.398247 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.166884 1522650 logs.go:138] Found kubelet problem: Oct 28 11:30:53 old-k8s-version-674802 kubelet[662]: E1028 11:30:53.397067 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.167207 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.396824 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.167388 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:04 old-k8s-version-674802 kubelet[662]: E1028 11:31:04.398636 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.167725 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397153 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.167908 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:18 old-k8s-version-674802 kubelet[662]: E1028 11:31:18.397448 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.168236 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:30 old-k8s-version-674802 kubelet[662]: E1028 11:31:30.396768 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.168418 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:32 old-k8s-version-674802 kubelet[662]: E1028 11:31:32.397834 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.168745 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:42 old-k8s-version-674802 kubelet[662]: E1028 11:31:42.396795 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.168925 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:47 old-k8s-version-674802 kubelet[662]: E1028 11:31:47.398090 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.169249 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:56 old-k8s-version-674802 kubelet[662]: E1028 11:31:56.396772 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.169434 1522650 logs.go:138] Found kubelet problem: Oct 28 11:31:58 old-k8s-version-674802 kubelet[662]: E1028 11:31:58.397362 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.169758 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: E1028 11:32:07.399844 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.169942 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.170339 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.170526 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:46.170854 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:46.173305 1522650 logs.go:138] Found kubelet problem: Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419884 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
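The long run of "Found kubelet problem" warnings above is produced by scanning the journalctl -u kubelet output for kubelet error records (klog lines whose severity marker is E). A minimal sketch of that scan; minikube's actual logs.go matcher is more selective, so treat this purely as the shape of the technique:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // findProblems flags kubelet journal lines carrying a klog error
    // marker ("kubelet[<pid>]: E...").
    func findProblems(journal string) []string {
        var problems []string
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            line := sc.Text()
            if strings.Contains(line, "kubelet[") && strings.Contains(line, ": E") {
                problems = append(problems, line)
            }
        }
        return problems
    }

    func main() {
        journal := "Oct 28 11:27:13 node kubelet[662]: E1028 11:27:13.704595 662 pod_workers.go:191] Error syncing pod ...\n" +
            "Oct 28 11:27:14 node kubelet[662]: I1028 11:27:14.000000 662 kubelet.go:100] started"
        for _, p := range findProblems(journal) {
            fmt.Println("Found kubelet problem:", p)
        }
    }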
I1028 11:32:46.173316 1522650 logs.go:123] Gathering logs for etcd [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148] ...
I1028 11:32:46.173330 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148"
I1028 11:32:46.223969 1522650 logs.go:123] Gathering logs for kube-proxy [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf] ...
I1028 11:32:46.224006 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf"
I1028 11:32:46.277259 1522650 logs.go:123] Gathering logs for kubernetes-dashboard [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b] ...
I1028 11:32:46.277289 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b"
I1028 11:32:46.337485 1522650 logs.go:123] Gathering logs for storage-provisioner [e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5] ...
I1028 11:32:46.337520 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5"
I1028 11:32:46.395372 1522650 logs.go:123] Gathering logs for container status ...
I1028 11:32:46.395422 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1028 11:32:46.450094 1522650 logs.go:123] Gathering logs for describe nodes ...
I1028 11:32:46.450127 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1028 11:32:46.647254 1522650 logs.go:123] Gathering logs for kube-scheduler [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824] ...
I1028 11:32:46.647867 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824"
I1028 11:32:46.698393 1522650 logs.go:123] Gathering logs for kube-controller-manager [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438] ...
I1028 11:32:46.698421 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438"
I1028 11:32:46.756978 1522650 logs.go:123] Gathering logs for kube-controller-manager [4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7] ...
I1028 11:32:46.757015 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7"
I1028 11:32:46.831122 1522650 logs.go:123] Gathering logs for kindnet [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3] ...
I1028 11:32:46.831163 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3"
I1028 11:32:46.880307 1522650 logs.go:123] Gathering logs for etcd [01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232] ...
I1028 11:32:46.880340 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232"
I1028 11:32:46.936132 1522650 logs.go:123] Gathering logs for kube-scheduler [857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056] ...
I1028 11:32:46.936165 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056"
I1028 11:32:46.982104 1522650 logs.go:123] Gathering logs for kube-apiserver [ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc] ...
I1028 11:32:46.982133 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc"
I1028 11:32:47.048875 1522650 logs.go:123] Gathering logs for coredns [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59] ...
I1028 11:32:47.048911 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59"
I1028 11:32:47.093129 1522650 logs.go:123] Gathering logs for kube-proxy [8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015] ...
I1028 11:32:47.093157 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015"
I1028 11:32:47.132824 1522650 logs.go:123] Gathering logs for storage-provisioner [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8] ...
I1028 11:32:47.132849 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8"
I1028 11:32:47.172011 1522650 logs.go:123] Gathering logs for containerd ...
I1028 11:32:47.172037 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I1028 11:32:47.239434 1522650 logs.go:123] Gathering logs for dmesg ...
I1028 11:32:47.239469 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1028 11:32:47.257467 1522650 logs.go:123] Gathering logs for kube-apiserver [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6] ...
I1028 11:32:47.257498 1522650 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6"
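Every "Gathering logs for <component>" step above has the same command shape: fetch the last 400 lines of one container's logs via crictl, wrapped in bash and sudo over SSH. A stripped-down local sketch without the sudo/SSH transport:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather fetches the last 400 lines of one container's logs, the
    // same crictl invocation each Run line above wraps.
    func gather(id string) (string, error) {
        out, err := exec.Command("/usr/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := gather("c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6")
        if err != nil {
            fmt.Println("crictl failed:", err)
        }
        fmt.Print(out)
    }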
I1028 11:32:47.317252 1522650 out.go:358] Setting ErrFile to fd 2...
I1028 11:32:47.317286 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
W1028 11:32:47.317350 1522650 out.go:270] X Problems detected in kubelet:
W1028 11:32:47.317369 1522650 out.go:270] Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:47.317384 1522650 out.go:270] Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:47.317397 1522650 out.go:270] Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W1028 11:32:47.317405 1522650 out.go:270] Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
W1028 11:32:47.317419 1522650 out.go:270] Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419884 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
I1028 11:32:47.317439 1522650 out.go:358] Setting ErrFile to fd 2...
I1028 11:32:47.317446 1522650 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1028 11:32:48.132711 1533911 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19876-1313708/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-542883:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e -I lz4 -xf /preloaded.tar -C /extractDir: (4.704798769s)
I1028 11:32:48.132743 1533911 kic.go:203] duration metric: took 4.704935744s to extract preloaded images to volume ...
W1028 11:32:48.132879 1533911 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1028 11:32:48.132988 1533911 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I1028 11:32:48.191526 1533911 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-542883 --name embed-certs-542883 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-542883 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-542883 --network embed-certs-542883 --ip 192.168.85.2 --volume embed-certs-542883:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e
I1028 11:32:48.518235 1533911 cli_runner.go:164] Run: docker container inspect embed-certs-542883 --format={{.State.Running}}
I1028 11:32:48.536429 1533911 cli_runner.go:164] Run: docker container inspect embed-certs-542883 --format={{.State.Status}}
I1028 11:32:48.560183 1533911 cli_runner.go:164] Run: docker exec embed-certs-542883 stat /var/lib/dpkg/alternatives/iptables
I1028 11:32:48.632490 1533911 oci.go:144] the created container "embed-certs-542883" has a running status.
I1028 11:32:48.632524 1533911 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa...
I1028 11:32:49.179713 1533911 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1028 11:32:49.207087 1533911 cli_runner.go:164] Run: docker container inspect embed-certs-542883 --format={{.State.Status}}
I1028 11:32:49.229265 1533911 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1028 11:32:49.229289 1533911 kic_runner.go:114] Args: [docker exec --privileged embed-certs-542883 chown docker:docker /home/docker/.ssh/authorized_keys]
I1028 11:32:49.339865 1533911 cli_runner.go:164] Run: docker container inspect embed-certs-542883 --format={{.State.Status}}
I1028 11:32:49.367082 1533911 machine.go:93] provisionDockerMachine start ...
I1028 11:32:49.367181 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
I1028 11:32:49.386731 1533911 main.go:141] libmachine: Using SSH client type: native
I1028 11:32:49.387020 1533911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 40385 <nil> <nil>}
I1028 11:32:49.387038 1533911 main.go:141] libmachine: About to run SSH command:
hostname
I1028 11:32:49.541148 1533911 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-542883
I1028 11:32:49.541190 1533911 ubuntu.go:169] provisioning hostname "embed-certs-542883"
I1028 11:32:49.541262 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
I1028 11:32:49.558765 1533911 main.go:141] libmachine: Using SSH client type: native
I1028 11:32:49.559012 1533911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 40385 <nil> <nil>}
I1028 11:32:49.559030 1533911 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-542883 && echo "embed-certs-542883" | sudo tee /etc/hostname
I1028 11:32:49.722988 1533911 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-542883
I1028 11:32:49.723130 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
I1028 11:32:49.750433 1533911 main.go:141] libmachine: Using SSH client type: native
I1028 11:32:49.750748 1533911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x415580] 0x417dc0 <nil> [] 0s} 127.0.0.1 40385 <nil> <nil>}
I1028 11:32:49.750781 1533911 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-542883' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-542883/g' /etc/hosts;
	else
		echo '127.0.1.1 embed-certs-542883' | sudo tee -a /etc/hosts;
	fi
fi
I1028 11:32:49.880363 1533911 main.go:141] libmachine: SSH cmd err, output: <nil>:
I1028 11:32:49.880438 1533911 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19876-1313708/.minikube CaCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19876-1313708/.minikube}
I1028 11:32:49.880496 1533911 ubuntu.go:177] setting up certificates
I1028 11:32:49.880531 1533911 provision.go:84] configureAuth start
I1028 11:32:49.880628 1533911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-542883
I1028 11:32:49.902650 1533911 provision.go:143] copyHostCerts
I1028 11:32:49.902709 1533911 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem, removing ...
I1028 11:32:49.902719 1533911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem
I1028 11:32:49.902793 1533911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.pem (1078 bytes)
I1028 11:32:49.902878 1533911 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem, removing ...
I1028 11:32:49.902883 1533911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem
I1028 11:32:49.902907 1533911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/cert.pem (1123 bytes)
I1028 11:32:49.902960 1533911 exec_runner.go:144] found /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem, removing ...
I1028 11:32:49.902965 1533911 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem
I1028 11:32:49.902986 1533911 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19876-1313708/.minikube/key.pem (1675 bytes)
I1028 11:32:49.903037 1533911 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem org=jenkins.embed-certs-542883 san=[127.0.0.1 192.168.85.2 embed-certs-542883 localhost minikube]
I1028 11:32:50.055296 1533911 provision.go:177] copyRemoteCerts
I1028 11:32:50.055373 1533911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1028 11:32:50.055423 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
I1028 11:32:50.071940 1533911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40385 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa Username:docker}
I1028 11:32:50.165089 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I1028 11:32:50.193238 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
I1028 11:32:50.219000 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1028 11:32:50.245276 1533911 provision.go:87] duration metric: took 364.718072ms to configureAuth
I1028 11:32:50.245350 1533911 ubuntu.go:193] setting minikube options for container-runtime
I1028 11:32:50.245560 1533911 config.go:182] Loaded profile config "embed-certs-542883": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.2
I1028 11:32:50.245576 1533911 machine.go:96] duration metric: took 878.470456ms to provisionDockerMachine
I1028 11:32:50.245584 1533911 client.go:171] duration metric: took 7.663368273s to LocalClient.Create
I1028 11:32:50.245613 1533911 start.go:167] duration metric: took 7.663459974s to libmachine.API.Create "embed-certs-542883"
I1028 11:32:50.245624 1533911 start.go:293] postStartSetup for "embed-certs-542883" (driver="docker")
I1028 11:32:50.245634 1533911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1028 11:32:50.245699 1533911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1028 11:32:50.245743 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
I1028 11:32:50.263513 1533911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40385 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa Username:docker}
I1028 11:32:50.360793 1533911 ssh_runner.go:195] Run: cat /etc/os-release
I1028 11:32:50.363963 1533911 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1028 11:32:50.364009 1533911 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1028 11:32:50.364020 1533911 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1028 11:32:50.364027 1533911 info.go:137] Remote host: Ubuntu 22.04.5 LTS
I1028 11:32:50.364041 1533911 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-1313708/.minikube/addons for local assets ...
I1028 11:32:50.364099 1533911 filesync.go:126] Scanning /home/jenkins/minikube-integration/19876-1313708/.minikube/files for local assets ...
I1028 11:32:50.364185 1533911 filesync.go:149] local asset: /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem -> 13190982.pem in /etc/ssl/certs
I1028 11:32:50.364299 1533911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I1028 11:32:50.373027 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem --> /etc/ssl/certs/13190982.pem (1708 bytes)
I1028 11:32:50.402964 1533911 start.go:296] duration metric: took 157.325869ms for postStartSetup
I1028 11:32:50.403338 1533911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-542883
I1028 11:32:50.420444 1533911 profile.go:143] Saving config to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/config.json ...
I1028 11:32:50.420742 1533911 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1028 11:32:50.420795 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
I1028 11:32:50.439003 1533911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40385 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa Username:docker}
I1028 11:32:50.528289 1533911 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I1028 11:32:50.532792 1533911 start.go:128] duration metric: took 7.955483107s to createHost
I1028 11:32:50.532817 1533911 start.go:83] releasing machines lock for "embed-certs-542883", held for 7.955628969s
I1028 11:32:50.532887 1533911 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-542883
I1028 11:32:50.548916 1533911 ssh_runner.go:195] Run: cat /version.json
I1028 11:32:50.548987 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
I1028 11:32:50.548916 1533911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I1028 11:32:50.549117 1533911 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-542883
I1028 11:32:50.572058 1533911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40385 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa Username:docker}
I1028 11:32:50.573233 1533911 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:40385 SSHKeyPath:/home/jenkins/minikube-integration/19876-1313708/.minikube/machines/embed-certs-542883/id_rsa Username:docker}
I1028 11:32:50.796632 1533911 ssh_runner.go:195] Run: systemctl --version
I1028 11:32:50.800971 1533911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I1028 11:32:50.805119 1533911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I1028 11:32:50.830749 1533911 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I1028 11:32:50.830829 1533911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I1028 11:32:50.857686 1533911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
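The two find one-liners above are dense. The first is equivalent to the following loop (a readable sketch of the same logic, not the literal command minikube runs):

    # Patch any enabled loopback CNI config: make sure it carries a "name"
    # field and pin cniVersion to 1.0.0 so current CNI plugins accept it.
    for f in /etc/cni/net.d/*loopback.conf*; do
      case "$f" in *.mk_disabled) continue ;; esac
      grep -q loopback "$f" || continue
      grep -q name "$f" || \
        sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
      sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
    done

The second simply renames any bridge/podman configs to *.mk_disabled, so the only active CNI is the one minikube itself configures (kindnet, chosen a little later in this log).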
I1028 11:32:50.857712 1533911 start.go:495] detecting cgroup driver to use...
I1028 11:32:50.857768 1533911 detect.go:187] detected "cgroupfs" cgroup driver on host os
I1028 11:32:50.857835 1533911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I1028 11:32:50.869945 1533911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I1028 11:32:50.881779 1533911 docker.go:217] disabling cri-docker service (if available) ...
I1028 11:32:50.881842 1533911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I1028 11:32:50.895382 1533911 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I1028 11:32:50.914962 1533911 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I1028 11:32:51.006677 1533911 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I1028 11:32:51.109510 1533911 docker.go:233] disabling docker service ...
I1028 11:32:51.109629 1533911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I1028 11:32:51.131884 1533911 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I1028 11:32:51.144891 1533911 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I1028 11:32:51.236896 1533911 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I1028 11:32:51.321851 1533911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I1028 11:32:51.333782 1533911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I1028 11:32:51.350766 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I1028 11:32:51.361252 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I1028 11:32:51.371723 1533911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I1028 11:32:51.371837 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I1028 11:32:51.382019 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1028 11:32:51.392182 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I1028 11:32:51.407931 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I1028 11:32:51.431278 1533911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I1028 11:32:51.441203 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I1028 11:32:51.451416 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I1028 11:32:51.462214 1533911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I1028 11:32:51.473201 1533911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1028 11:32:51.483086 1533911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1028 11:32:51.492521 1533911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1028 11:32:51.586221 1533911 ssh_runner.go:195] Run: sudo systemctl restart containerd
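The run of sed edits above (sandbox image, cgroupfs driver, runc v2, CNI conf dir, unprivileged ports) all land in /etc/containerd/config.toml, and the daemon-reload plus restart just above makes them effective. A quick sanity check of the result from the host (a sketch, assuming the embed-certs-542883 container from this run is still up):

    docker exec embed-certs-542883 sh -c '
      grep -n "sandbox_image"             /etc/containerd/config.toml
      grep -n "SystemdCgroup"             /etc/containerd/config.toml
      grep -n "enable_unprivileged_ports" /etc/containerd/config.toml
    '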
I1028 11:32:51.744597 1533911 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I1028 11:32:51.744720 1533911 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I1028 11:32:51.748313 1533911 start.go:563] Will wait 60s for crictl version
I1028 11:32:51.748395 1533911 ssh_runner.go:195] Run: which crictl
I1028 11:32:51.751653 1533911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I1028 11:32:51.792849 1533911 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.22
RuntimeApiVersion: v1
I1028 11:32:51.792932 1533911 ssh_runner.go:195] Run: containerd --version
I1028 11:32:51.818888 1533911 ssh_runner.go:195] Run: containerd --version
I1028 11:32:51.843993 1533911 out.go:177] * Preparing Kubernetes v1.31.2 on containerd 1.7.22 ...
I1028 11:32:51.845426 1533911 cli_runner.go:164] Run: docker network inspect embed-certs-542883 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1028 11:32:51.861318 1533911 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I1028 11:32:51.865183 1533911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
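The host.minikube.internal one-liner above is minikube's idempotent /etc/hosts update. Expanded, it does the following; note the final cp rather than mv, since /etc/hosts is bind-mounted into the container and must be overwritten in place (a sketch of the same logic):

    # Drop any stale mapping, append the current one, overwrite in place.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.85.1\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts

The same pattern recurs below for control-plane.minikube.internal.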
I1028 11:32:51.876032 1533911 kubeadm.go:883] updating cluster {Name:embed-certs-542883 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-542883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I1028 11:32:51.876165 1533911 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime containerd
I1028 11:32:51.876233 1533911 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:32:51.918199 1533911 containerd.go:627] all images are preloaded for containerd runtime.
I1028 11:32:51.918222 1533911 containerd.go:534] Images already preloaded, skipping extraction
I1028 11:32:51.918282 1533911 ssh_runner.go:195] Run: sudo crictl images --output json
I1028 11:32:51.961431 1533911 containerd.go:627] all images are preloaded for containerd runtime.
I1028 11:32:51.961454 1533911 cache_images.go:84] Images are preloaded, skipping loading
I1028 11:32:51.961462 1533911 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.2 containerd true true} ...
I1028 11:32:51.961557 1533911 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-542883 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.31.2 ClusterName:embed-certs-542883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I1028 11:32:51.961623 1533911 ssh_runner.go:195] Run: sudo crictl info
I1028 11:32:52.000108 1533911 cni.go:84] Creating CNI manager for ""
I1028 11:32:52.000130 1533911 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I1028 11:32:52.000140 1533911 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I1028 11:32:52.000161 1533911 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-542883 NodeName:embed-certs-542883 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I1028 11:32:52.000279 1533911 kubeadm.go:195] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.85.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "embed-certs-542883"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "192.168.85.2"
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
  extraArgs:
    - name: "enable-admission-plugins"
      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    - name: "allocate-node-cidrs"
      value: "true"
    - name: "leader-elect"
      value: "false"
scheduler:
  extraArgs:
    - name: "leader-elect"
      value: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      - name: "proxy-refresh-interval"
        value: "70000"
kubernetesVersion: v1.31.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
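The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below) and promoted to kubeadm.yaml before init. It can be checked with the same kubeadm binary minikube stages on the node; a sketch, with the binary path taken from the ExecStart line above:

    docker exec embed-certs-542883 sudo \
      /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml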
I1028 11:32:52.000342 1533911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
I1028 11:32:52.010471 1533911 binaries.go:44] Found k8s binaries, skipping transfer
I1028 11:32:52.010549 1533911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1028 11:32:52.020129 1533911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I1028 11:32:52.039264 1533911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1028 11:32:52.058518 1533911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
I1028 11:32:52.078039 1533911 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I1028 11:32:52.081556 1533911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1028 11:32:52.093155 1533911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I1028 11:32:52.194918 1533911 ssh_runner.go:195] Run: sudo systemctl start kubelet
I1028 11:32:52.211459 1533911 certs.go:68] Setting up /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883 for IP: 192.168.85.2
I1028 11:32:52.211531 1533911 certs.go:194] generating shared ca certs ...
I1028 11:32:52.211562 1533911 certs.go:226] acquiring lock for ca certs: {Name:mk0d3ceca6221298cea760035b38b9c704e7b693 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 11:32:52.211776 1533911 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.key
I1028 11:32:52.211849 1533911 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.key
I1028 11:32:52.211871 1533911 certs.go:256] generating profile certs ...
I1028 11:32:52.211964 1533911 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.key
I1028 11:32:52.212000 1533911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.crt with IP's: []
I1028 11:32:52.482100 1533911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.crt ...
I1028 11:32:52.482134 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.crt: {Name:mkc2100167cd18b06b84ef0e3a475a22f1be0b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 11:32:52.482342 1533911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.key ...
I1028 11:32:52.482358 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/client.key: {Name:mkb2837f1d77020aff5cdda4d8ea3d30bc7fb871 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 11:32:52.483040 1533911 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key.6c26fc4d
I1028 11:32:52.483097 1533911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt.6c26fc4d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I1028 11:32:53.160851 1533911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt.6c26fc4d ...
I1028 11:32:53.160886 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt.6c26fc4d: {Name:mk0715493b6d379c08fd8c18774148895c639a0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 11:32:53.161555 1533911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key.6c26fc4d ...
I1028 11:32:53.161577 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key.6c26fc4d: {Name:mke7826bac1f5ff37de405e1ec9c1b4350078356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 11:32:53.161712 1533911 certs.go:381] copying /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt.6c26fc4d -> /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt
I1028 11:32:53.161841 1533911 certs.go:385] copying /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key.6c26fc4d -> /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key
I1028 11:32:53.161931 1533911 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.key
I1028 11:32:53.161968 1533911 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.crt with IP's: []
I1028 11:32:53.452656 1533911 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.crt ...
I1028 11:32:53.452687 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.crt: {Name:mkae14fb0ab70a2d610d7f9bd3223f3e822792ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 11:32:53.452921 1533911 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.key ...
I1028 11:32:53.452939 1533911 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.key: {Name:mk597dc519e482e4de044bfd06cfa6289329f33c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1028 11:32:53.453819 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098.pem (1338 bytes)
W1028 11:32:53.453866 1533911 certs.go:480] ignoring /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098_empty.pem, impossibly tiny 0 bytes
I1028 11:32:53.453883 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca-key.pem (1675 bytes)
I1028 11:32:53.453908 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/ca.pem (1078 bytes)
I1028 11:32:53.453934 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/cert.pem (1123 bytes)
I1028 11:32:53.453961 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/key.pem (1675 bytes)
I1028 11:32:53.454012 1533911 certs.go:484] found cert: /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem (1708 bytes)
I1028 11:32:53.454625 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1028 11:32:53.480099 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1028 11:32:53.505222 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1028 11:32:53.529431 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1028 11:32:53.557767 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I1028 11:32:53.582376 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I1028 11:32:53.606975 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1028 11:32:53.633689 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/profiles/embed-certs-542883/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1028 11:32:53.677702 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/certs/1319098.pem --> /usr/share/ca-certificates/1319098.pem (1338 bytes)
I1028 11:32:53.705568 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/files/etc/ssl/certs/13190982.pem --> /usr/share/ca-certificates/13190982.pem (1708 bytes)
I1028 11:32:53.735856 1533911 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19876-1313708/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1028 11:32:53.764931 1533911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1028 11:32:53.785339 1533911 ssh_runner.go:195] Run: openssl version
I1028 11:32:53.792240 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1319098.pem && ln -fs /usr/share/ca-certificates/1319098.pem /etc/ssl/certs/1319098.pem"
I1028 11:32:53.804696 1533911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1319098.pem
I1028 11:32:53.809144 1533911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 28 10:48 /usr/share/ca-certificates/1319098.pem
I1028 11:32:53.809212 1533911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1319098.pem
I1028 11:32:53.817244 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1319098.pem /etc/ssl/certs/51391683.0"
I1028 11:32:53.828886 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13190982.pem && ln -fs /usr/share/ca-certificates/13190982.pem /etc/ssl/certs/13190982.pem"
I1028 11:32:53.839915 1533911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13190982.pem
I1028 11:32:53.844260 1533911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 28 10:48 /usr/share/ca-certificates/13190982.pem
I1028 11:32:53.844323 1533911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13190982.pem
I1028 11:32:53.852009 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13190982.pem /etc/ssl/certs/3ec20f2e.0"
I1028 11:32:53.874601 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1028 11:32:53.891728 1533911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1028 11:32:53.907980 1533911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 28 10:41 /usr/share/ca-certificates/minikubeCA.pem
I1028 11:32:53.908053 1533911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1028 11:32:53.930015 1533911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
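The ln -fs targets above are not arbitrary: each symlink name is the OpenSSL subject hash of the certificate plus a .0 suffix, which is how OpenSSL's CApath lookup locates trust anchors. The hash printed by the command minikube just ran is exactly the link name:

    # Runs inside the node (openssl is present there, as the log shows):
    minikube -p embed-certs-542883 ssh \
      'openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem'
    # prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink above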
I1028 11:32:53.953413 1533911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I1028 11:32:53.967514 1533911 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I1028 11:32:53.967569 1533911 kubeadm.go:392] StartCluster: {Name:embed-certs-542883 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1729876044-19868@sha256:98fb05d9d766cf8d630ce90381e12faa07711c611be9bb2c767cfc936533477e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-542883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1028 11:32:53.967738 1533911 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I1028 11:32:53.967797 1533911 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1028 11:32:54.037681 1533911 cri.go:89] found id: ""
I1028 11:32:54.037751 1533911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1028 11:32:54.048590 1533911 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1028 11:32:54.057684 1533911 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I1028 11:32:54.057759 1533911 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1028 11:32:54.067182 1533911 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1028 11:32:54.067200 1533911 kubeadm.go:157] found existing configuration files:
I1028 11:32:54.067260 1533911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I1028 11:32:54.076439 1533911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I1028 11:32:54.076524 1533911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I1028 11:32:54.085394 1533911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I1028 11:32:54.094624 1533911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I1028 11:32:54.094695 1533911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I1028 11:32:54.103513 1533911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I1028 11:32:54.112477 1533911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I1028 11:32:54.112543 1533911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I1028 11:32:54.121065 1533911 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I1028 11:32:54.130065 1533911 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I1028 11:32:54.130129 1533911 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I1028 11:32:54.138542 1533911 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1028 11:32:54.182294 1533911 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
I1028 11:32:54.182373 1533911 kubeadm.go:310] [preflight] Running pre-flight checks
I1028 11:32:54.210387 1533911 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I1028 11:32:54.210481 1533911 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-aws
I1028 11:32:54.210536 1533911 kubeadm.go:310] OS: Linux
I1028 11:32:54.210598 1533911 kubeadm.go:310] CGROUPS_CPU: enabled
I1028 11:32:54.210665 1533911 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I1028 11:32:54.210729 1533911 kubeadm.go:310] CGROUPS_CPUSET: enabled
I1028 11:32:54.210792 1533911 kubeadm.go:310] CGROUPS_DEVICES: enabled
I1028 11:32:54.210858 1533911 kubeadm.go:310] CGROUPS_FREEZER: enabled
I1028 11:32:54.210923 1533911 kubeadm.go:310] CGROUPS_MEMORY: enabled
I1028 11:32:54.210985 1533911 kubeadm.go:310] CGROUPS_PIDS: enabled
I1028 11:32:54.211048 1533911 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I1028 11:32:54.211109 1533911 kubeadm.go:310] CGROUPS_BLKIO: enabled
I1028 11:32:54.275030 1533911 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I1028 11:32:54.275191 1533911 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I1028 11:32:54.275327 1533911 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I1028 11:32:54.281392 1533911 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1028 11:32:57.318433 1522650 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I1028 11:32:57.330588 1522650 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I1028 11:32:57.332171 1522650 out.go:201]
W1028 11:32:57.333624 1522650 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W1028 11:32:57.333844 1522650 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W1028 11:32:57.333985 1522650 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W1028 11:32:57.334048 1522650 out.go:270] *
W1028 11:32:57.335332 1522650 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1028 11:32:57.337475 1522650 out.go:201]
I1028 11:32:54.283975 1533911 out.go:235] - Generating certificates and keys ...
I1028 11:32:54.284086 1533911 kubeadm.go:310] [certs] Using existing ca certificate authority
I1028 11:32:54.284154 1533911 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I1028 11:32:55.599535 1533911 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I1028 11:32:56.197181 1533911 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I1028 11:32:56.714502 1533911 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
be9f8802d8916 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 5345e4b44a194 dashboard-metrics-scraper-8d5bb5db8-8ft4v
af354fdce961d ba04bb24b9575 5 minutes ago Running storage-provisioner 2 348661137a892 storage-provisioner
9666309986efc 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 814b17aa028a6 kubernetes-dashboard-cd95d586-v2szp
a4e428255f3fd 1611cd07b61d5 5 minutes ago Running busybox 1 e55de8161dcff busybox
b864ea5367f07 db91994f4ee8f 5 minutes ago Running coredns 1 9cc76b05b8d1c coredns-74ff55c5b-wlp24
42478c583a7df 0bcd66b03df5f 5 minutes ago Running kindnet-cni 1 38356644fc1c8 kindnet-njzd8
c0ed41137fbff 25a5233254979 5 minutes ago Running kube-proxy 1 9ac5a70069144 kube-proxy-sdcls
e4aa22206b37d ba04bb24b9575 5 minutes ago Exited storage-provisioner 1 348661137a892 storage-provisioner
056d20453e357 1df8a2b116bd1 5 minutes ago Running kube-controller-manager 1 c36c0bc4cda7d kube-controller-manager-old-k8s-version-674802
31281b2de0e80 e7605f88f17d6 5 minutes ago Running kube-scheduler 1 0f570043b6027 kube-scheduler-old-k8s-version-674802
c02d779e69c4a 2c08bbbc02d3a 5 minutes ago Running kube-apiserver 1 be3ff56e02e17 kube-apiserver-old-k8s-version-674802
6208543cc8b3c 05b738aa1bc63 5 minutes ago Running etcd 1 bd060b6e7fd72 etcd-old-k8s-version-674802
4ca77cb193da9 1611cd07b61d5 6 minutes ago Exited busybox 0 ffc76deef9cf5 busybox
2a9df06520f73 db91994f4ee8f 7 minutes ago Exited coredns 0 9aeed8465d616 coredns-74ff55c5b-wlp24
120e0085c59b7 0bcd66b03df5f 7 minutes ago Exited kindnet-cni 0 701b8b812804d kindnet-njzd8
8d4b3dad3dd90 25a5233254979 7 minutes ago Exited kube-proxy 0 8f0e39e75e045 kube-proxy-sdcls
4937ca78533bb 1df8a2b116bd1 8 minutes ago Exited kube-controller-manager 0 dbec23a98a1c6 kube-controller-manager-old-k8s-version-674802
ba54ab63823c2 2c08bbbc02d3a 8 minutes ago Exited kube-apiserver 0 cb8fcd27e6db8 kube-apiserver-old-k8s-version-674802
01a108b46e6f4 05b738aa1bc63 8 minutes ago Exited etcd 0 a7ae75ef9e42a etcd-old-k8s-version-674802
857580d96023b e7605f88f17d6 8 minutes ago Exited kube-scheduler 0 81b2f8da04b4f kube-scheduler-old-k8s-version-674802
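The table above (the "==> container status <==" section of the embedded minikube logs dump) is CRI container state as crictl reports it; the Exited dashboard-metrics-scraper with ATTEMPT 5 is the crash loop from the kubelet errors earlier. The live equivalent is roughly (a sketch):

    minikube -p old-k8s-version-674802 ssh 'sudo crictl ps -a'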
==> containerd <==
Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.425316785Z" level=info msg="CreateContainer within sandbox \"5345e4b44a19483b63b20f1608ff31e77b765e63476661d06091c7f5730ef7db\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299\""
Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.425970470Z" level=info msg="StartContainer for \"7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299\""
Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.501874834Z" level=info msg="StartContainer for \"7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299\" returns successfully"
Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.528713123Z" level=info msg="shim disconnected" id=7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299 namespace=k8s.io
Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.528772856Z" level=warning msg="cleaning up after shim disconnected" id=7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299 namespace=k8s.io
Oct 28 11:29:13 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:13.528785295Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 28 11:29:14 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:14.080153419Z" level=info msg="RemoveContainer for \"309db5e97f5f2a99eeabfc6729f149f2248af8f014e3018a65087b5b753d7dfd\""
Oct 28 11:29:14 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:14.086338887Z" level=info msg="RemoveContainer for \"309db5e97f5f2a99eeabfc6729f149f2248af8f014e3018a65087b5b753d7dfd\" returns successfully"
Oct 28 11:29:57 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:57.417150504Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 11:29:57 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:57.428820503Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Oct 28 11:29:57 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:57.430735487Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Oct 28 11:29:57 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:29:57.430827950Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.400193598Z" level=info msg="CreateContainer within sandbox \"5345e4b44a19483b63b20f1608ff31e77b765e63476661d06091c7f5730ef7db\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.420739620Z" level=info msg="CreateContainer within sandbox \"5345e4b44a19483b63b20f1608ff31e77b765e63476661d06091c7f5730ef7db\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b\""
Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.421304239Z" level=info msg="StartContainer for \"be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b\""
Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.493952705Z" level=info msg="StartContainer for \"be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b\" returns successfully"
Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.519533658Z" level=info msg="shim disconnected" id=be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b namespace=k8s.io
Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.519605420Z" level=warning msg="cleaning up after shim disconnected" id=be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b namespace=k8s.io
Oct 28 11:30:34 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:34.519617129Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 28 11:30:35 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:35.286855726Z" level=info msg="RemoveContainer for \"7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299\""
Oct 28 11:30:35 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:30:35.294109410Z" level=info msg="RemoveContainer for \"7631bc5614df611ecf9e62992d7401e688390e5ca64417c5655b6c707c841299\" returns successfully"
Oct 28 11:32:39 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:32:39.398293323Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 11:32:39 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:32:39.416163075Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Oct 28 11:32:39 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:32:39.418036771Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Oct 28 11:32:39 old-k8s-version-674802 containerd[568]: time="2024-10-28T11:32:39.418255979Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
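The repeated PullImage failures above are by design: this test points the metrics-server deployment at the unresolvable image fake.domain/registry.k8s.io/echoserver:1.4, so every pull dies at DNS resolution before any registry traffic happens. A minimal Go sketch (a hypothetical helper, not part of the test suite) reproduces the same resolver error containerd reports:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// "fake.domain" has no DNS record, so the lookup fails the same way
	// containerd's registry HEAD request does in the log above.
	addrs, err := net.LookupHost("fake.domain")
	if err != nil {
		fmt.Println("lookup failed:", err) // "... no such host"
		return
	}
	fmt.Println(addrs)
}
```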
==> coredns [2a9df06520f732f1766508da84b61f745cb047b5f7bcf5bf3ef9cb3891f6239f] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:54691 - 59512 "HINFO IN 701964382770757036.1622313577083864860. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016523145s
==> coredns [b864ea5367f07235e01b7c4c4545bda20ba5924d99b8e542c0315227a77c2c59] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:37801 - 13344 "HINFO IN 981161145988552116.1207215888426285145. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.036677021s
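The HINFO queries for long random names are CoreDNS's loop plugin probing itself at startup; a prompt NXDOMAIN, as both instances above show, means no forwarding loop was detected. A sketch of the same probe, assuming the github.com/miekg/dns library that CoreDNS itself builds on:

```go
package main

import (
	"fmt"

	"github.com/miekg/dns"
)

func main() {
	// Send a random-looking HINFO probe to the local resolver, mimicking
	// the loop-detection query CoreDNS logs above. NXDOMAIN is the healthy
	// answer; seeing the same query come back in would indicate a loop.
	m := new(dns.Msg)
	m.SetQuestion("701964382770757036.1622313577083864860.", dns.TypeHINFO)
	c := new(dns.Client)
	r, rtt, err := c.Exchange(m, "127.0.0.1:53")
	if err != nil {
		fmt.Println("exchange failed:", err)
		return
	}
	fmt.Printf("rcode=%s rtt=%s\n", dns.RcodeToString[r.Rcode], rtt)
}
```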
==> describe nodes <==
Name: old-k8s-version-674802
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-674802
kubernetes.io/os=linux
minikube.k8s.io/commit=605803b196d1455ad0982199aad6722d11920536
minikube.k8s.io/name=old-k8s-version-674802
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_10_28T11_24_43_0700
minikube.k8s.io/version=v1.34.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Mon, 28 Oct 2024 11:24:39 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-674802
AcquireTime: <unset>
RenewTime: Mon, 28 Oct 2024 11:32:54 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                      Message
  ----             ------  -----------------                 ------------------                ------                      -------
  MemoryPressure   False   Mon, 28 Oct 2024 11:28:04 +0000   Mon, 28 Oct 2024 11:24:33 +0000   KubeletHasSufficientMemory  kubelet has sufficient memory available
  DiskPressure     False   Mon, 28 Oct 2024 11:28:04 +0000   Mon, 28 Oct 2024 11:24:33 +0000   KubeletHasNoDiskPressure    kubelet has no disk pressure
  PIDPressure      False   Mon, 28 Oct 2024 11:28:04 +0000   Mon, 28 Oct 2024 11:24:33 +0000   KubeletHasSufficientPID     kubelet has sufficient PID available
  Ready            True    Mon, 28 Oct 2024 11:28:04 +0000   Mon, 28 Oct 2024 11:24:58 +0000   KubeletReady                kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-674802
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022300Ki
pods: 110
System Info:
Machine ID: 647d9b2317cf4e92bba105056215e984
System UUID: 20b2357e-356d-46a8-b586-d57348d369c5
Boot ID: 7206fba0-79a5-434d-956e-eb6133d7b735
Kernel Version: 5.15.0-1071-aws
OS Image: Ubuntu 22.04.5 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.22
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
  Namespace             Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------             ----                                             ------------  ----------  ---------------  -------------  ---
  default               busybox                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m40s
  kube-system           coredns-74ff55c5b-wlp24                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m1s
  kube-system           etcd-old-k8s-version-674802                      100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m8s
  kube-system           kindnet-njzd8                                    100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m1s
  kube-system           kube-apiserver-old-k8s-version-674802            250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m8s
  kube-system           kube-controller-manager-old-k8s-version-674802   200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m8s
  kube-system           kube-proxy-sdcls                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
  kube-system           kube-scheduler-old-k8s-version-674802            100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m8s
  kube-system           metrics-server-9975d5f86-lv8qx                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m29s
  kube-system           storage-provisioner                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
  kubernetes-dashboard  dashboard-metrics-scraper-8d5bb5db8-8ft4v        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
  kubernetes-dashboard  kubernetes-dashboard-cd95d586-v2szp              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                950m (47%)   100m (5%)
  memory             420Mi (5%)   220Mi (2%)
  ephemeral-storage  100Mi (0%)   0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
  hugepages-32Mi     0 (0%)       0 (0%)
  hugepages-64Ki     0 (0%)       0 (0%)
Events:
  Type    Reason                   Age                    From        Message
  ----    ------                   ----                   ----        -------
  Normal  NodeHasSufficientMemory  8m27s (x5 over 8m28s)  kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m27s (x4 over 8m28s)  kubelet     Node old-k8s-version-674802 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m27s (x4 over 8m28s)  kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientPID
  Normal  Starting                 8m9s                   kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  8m8s                   kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    8m8s                   kubelet     Node old-k8s-version-674802 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     8m8s                   kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  8m8s                   kubelet     Updated Node Allocatable limit across pods
  Normal  NodeReady                8m1s                   kubelet     Node old-k8s-version-674802 status is now: NodeReady
  Normal  Starting                 8m                     kube-proxy  Starting kube-proxy.
  Normal  Starting                 6m                     kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  6m (x8 over 6m)        kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)        kubelet     Node old-k8s-version-674802 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     6m (x7 over 6m)        kubelet     Node old-k8s-version-674802 status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  6m                     kubelet     Updated Node Allocatable limit across pods
  Normal  Starting                 5m45s                  kube-proxy  Starting kube-proxy.
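The node summary above is kubectl describe node output. The same Conditions data can be read straight from the Node object; a minimal client-go sketch, assuming a reachable kubeconfig at the default path (only the node name is taken from this log):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; in this test run the harness points KUBECONFIG
	// elsewhere, so treat the path as an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "old-k8s-version-674802", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints the same rows as the Conditions table above.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}
```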
==> dmesg <==
[Oct28 09:59] systemd-journald[220]: Failed to send stream file descriptor to service manager: Connection refused
[Oct28 10:02] overlayfs: '/var/lib/containers/storage/overlay/l/ZLTOCNGE2IGM6DT7VP2QP7OV3M' not a directory
[ +0.673094] overlayfs: '/var/lib/containers/storage/overlay/l/Q2QJNMTVZL6GMULS36RA5ZJGSA' not a directory
==> etcd [01a108b46e6f4f9217c1f90a9611bdbc7956ad16edbfd8093ad46cc6ef34b232] <==
2024-10-28 11:24:32.750749 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
2024-10-28 11:24:32.750796 I | embed: listening for peers on 192.168.76.2:2380
raft2024/10/28 11:24:33 INFO: ea7e25599daad906 is starting a new election at term 1
raft2024/10/28 11:24:33 INFO: ea7e25599daad906 became candidate at term 2
raft2024/10/28 11:24:33 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2024/10/28 11:24:33 INFO: ea7e25599daad906 became leader at term 2
raft2024/10/28 11:24:33 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2024-10-28 11:24:33.415771 I | etcdserver: setting up the initial cluster version to 3.4
2024-10-28 11:24:33.416061 I | etcdserver: published {Name:old-k8s-version-674802 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2024-10-28 11:24:33.416199 I | embed: ready to serve client requests
2024-10-28 11:24:33.420512 I | embed: serving client requests on 192.168.76.2:2379
2024-10-28 11:24:33.425385 I | embed: ready to serve client requests
2024-10-28 11:24:33.433780 N | etcdserver/membership: set the initial cluster version to 3.4
2024-10-28 11:24:33.435652 I | etcdserver/api: enabled capabilities for version 3.4
2024-10-28 11:24:33.435921 I | embed: serving client requests on 127.0.0.1:2379
2024-10-28 11:24:59.142145 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:25:03.607852 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:25:13.607862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:25:23.607852 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:25:33.607968 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:25:43.607887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:25:53.607917 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:26:03.607848 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:26:13.609723 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:26:23.608181 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [6208543cc8b3c7edcccd800e0f9d98e845390bf870426de3226d81781dce3148] <==
2024-10-28 11:28:49.621294 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:28:59.621366 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:29:09.621202 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:29:19.621350 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:29:29.621333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:29:39.621293 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:29:49.621215 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:29:59.621398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:30:09.621298 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:30:19.621454 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:30:29.621382 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:30:39.621283 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:30:49.621351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:30:59.621468 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:31:09.621231 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:31:19.621334 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:31:29.621347 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:31:39.621198 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:31:49.621345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:31:59.621832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:32:09.621268 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:32:19.621480 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:32:29.621312 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:32:39.621760 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-10-28 11:32:49.624027 I | etcdserver/api/etcdhttp: /health OK (status code 200)
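The /health line every ten seconds is a liveness probe against etcd's local listener. A sketch of the same check, assuming the kubeadm-style flag --listen-metrics-urls=http://127.0.0.1:2381 (an assumption; this profile's exact etcd flags are not shown here):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumes etcd exposes a plain-HTTP health/metrics listener on
	// 127.0.0.1:2381; adjust host, port, or TLS if the profile differs.
	resp, err := http.Get("http://127.0.0.1:2381/health")
	if err != nil {
		fmt.Println("health check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d body=%s\n", resp.StatusCode, body) // expect {"health":"true"}
}
```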
==> kernel <==
11:32:59 up 1 day, 17:15, 0 users, load average: 1.39, 1.66, 2.29
Linux old-k8s-version-674802 5.15.0-1071-aws #77~20.04.1-Ubuntu SMP Thu Oct 3 19:34:36 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.5 LTS"
==> kindnet [120e0085c59b7ce7fd3c7afbb14ea7637d4c18b660f3d35631be06f9007e3a33] <==
I1028 11:25:01.828559 1 main.go:148] setting mtu 1500 for CNI
I1028 11:25:01.828572 1 main.go:178] kindnetd IP family: "ipv4"
I1028 11:25:01.828585 1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
I1028 11:25:02.130829 1 controller.go:338] Starting controller kube-network-policies
I1028 11:25:02.131003 1 controller.go:342] Waiting for informer caches to sync
I1028 11:25:02.131017 1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
I1028 11:25:02.331761 1 shared_informer.go:320] Caches are synced for kube-network-policies
I1028 11:25:02.331784 1 metrics.go:61] Registering metrics
I1028 11:25:02.331940 1 controller.go:378] Syncing nftables rules
I1028 11:25:12.137929 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:25:12.138186 1 main.go:300] handling current node
I1028 11:25:22.129556 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:25:22.129658 1 main.go:300] handling current node
I1028 11:25:32.135605 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:25:32.135664 1 main.go:300] handling current node
I1028 11:25:42.137228 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:25:42.137502 1 main.go:300] handling current node
I1028 11:25:52.129926 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:25:52.129960 1 main.go:300] handling current node
I1028 11:26:02.130033 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:26:02.130073 1 main.go:300] handling current node
I1028 11:26:12.133426 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:26:12.133461 1 main.go:300] handling current node
I1028 11:26:22.131696 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:26:22.131732 1 main.go:300] handling current node
==> kindnet [42478c583a7df5e62ae3718bc78fb6be211abb490f9190466870859ec29e3bf3] <==
I1028 11:30:55.437974 1 main.go:300] handling current node
I1028 11:31:05.435979 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:31:05.436015 1 main.go:300] handling current node
I1028 11:31:15.428756 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:31:15.428861 1 main.go:300] handling current node
I1028 11:31:25.435910 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:31:25.435949 1 main.go:300] handling current node
I1028 11:31:35.437103 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:31:35.437153 1 main.go:300] handling current node
I1028 11:31:45.430845 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:31:45.430879 1 main.go:300] handling current node
I1028 11:31:55.435716 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:31:55.435752 1 main.go:300] handling current node
I1028 11:32:05.429615 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:32:05.429647 1 main.go:300] handling current node
I1028 11:32:15.429104 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:32:15.429145 1 main.go:300] handling current node
I1028 11:32:25.435442 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:32:25.435482 1 main.go:300] handling current node
I1028 11:32:35.436794 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:32:35.436829 1 main.go:300] handling current node
I1028 11:32:45.436724 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:32:45.436758 1 main.go:300] handling current node
I1028 11:32:55.431692 1 main.go:296] Handling node with IPs: map[192.168.76.2:{}]
I1028 11:32:55.431728 1 main.go:300] handling current node
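The repeating pair of "Handling node ... / handling current node" lines every ten seconds is a ticker-driven reconcile loop over the known nodes. A stripped-down sketch of that pattern (illustrative only, not kindnet's actual code):

```go
package main

import (
	"log"
	"time"
)

// reconcile stands in for kindnet's per-node sync work (routes, CNI config,
// nftables rules); here it only logs, mirroring the lines above.
func reconcile(nodeIPs map[string]struct{}) {
	for ip := range nodeIPs {
		log.Printf("Handling node with IPs: map[%s:{}]", ip)
	}
}

func main() {
	nodes := map[string]struct{}{"192.168.76.2": {}}
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		reconcile(nodes)
	}
}
```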
==> kube-apiserver [ba54ab63823c2fcfe3e9bc95fca852e480e0d8fae4071a23e1fc38d3e74384cc] <==
I1028 11:24:40.365291 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1028 11:24:40.365416 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1028 11:24:40.389107 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1028 11:24:40.400225 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1028 11:24:40.400395 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1028 11:24:40.836649 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1028 11:24:40.891712 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1028 11:24:40.949077 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I1028 11:24:40.950426 1 controller.go:606] quota admission added evaluator for: endpoints
I1028 11:24:40.954844 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1028 11:24:42.002434 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I1028 11:24:42.480303 1 controller.go:606] quota admission added evaluator for: deployments.apps
I1028 11:24:42.561454 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I1028 11:24:50.915824 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I1028 11:24:58.389577 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I1028 11:24:58.724133 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I1028 11:25:04.776086 1 client.go:360] parsed scheme: "passthrough"
I1028 11:25:04.776130 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 11:25:04.776139 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1028 11:25:38.192691 1 client.go:360] parsed scheme: "passthrough"
I1028 11:25:38.192733 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 11:25:38.192742 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1028 11:26:16.152283 1 client.go:360] parsed scheme: "passthrough"
I1028 11:26:16.152346 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 11:26:16.152355 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-apiserver [c02d779e69c4a6181f499ea147b62985bdd68ffb9d61fe7dab43115ca4318de6] <==
I1028 11:29:58.993279 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 11:29:58.993288 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1028 11:30:14.511668 1 handler_proxy.go:102] no RequestInfo found in the context
E1028 11:30:14.511767 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1028 11:30:14.511783 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1028 11:30:31.959866 1 client.go:360] parsed scheme: "passthrough"
I1028 11:30:31.959909 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 11:30:31.959919 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1028 11:31:04.256336 1 client.go:360] parsed scheme: "passthrough"
I1028 11:31:04.256378 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 11:31:04.256387 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1028 11:31:35.753953 1 client.go:360] parsed scheme: "passthrough"
I1028 11:31:35.753996 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 11:31:35.754030 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I1028 11:32:05.876255 1 client.go:360] parsed scheme: "passthrough"
I1028 11:32:05.876304 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 11:32:05.876313 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W1028 11:32:12.752803 1 handler_proxy.go:102] no RequestInfo found in the context
E1028 11:32:12.752996 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I1028 11:32:12.753018 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I1028 11:32:47.220657 1 client.go:360] parsed scheme: "passthrough"
I1028 11:32:47.220715 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I1028 11:32:47.220725 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
==> kube-controller-manager [056d20453e357e86aa3e62b0dd7d945c40ec05cc5f462727941eac3714718438] <==
W1028 11:28:38.012362 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 11:29:01.947463 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 11:29:09.662751 1 request.go:655] Throttling request took 1.048494344s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1028 11:29:10.514128 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 11:29:32.449331 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 11:29:42.164901 1 request.go:655] Throttling request took 1.048392637s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
W1028 11:29:43.016425 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 11:30:02.952240 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 11:30:14.666885 1 request.go:655] Throttling request took 1.048369691s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
W1028 11:30:15.518322 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 11:30:33.454758 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 11:30:47.168889 1 request.go:655] Throttling request took 1.048192702s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
W1028 11:30:48.020737 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 11:31:03.956679 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 11:31:19.671181 1 request.go:655] Throttling request took 1.048408313s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
W1028 11:31:20.522538 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 11:31:34.458598 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 11:31:52.172936 1 request.go:655] Throttling request took 1.048338861s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1028 11:31:53.024452 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 11:32:04.960867 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 11:32:24.674832 1 request.go:655] Throttling request took 1.045105943s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1028 11:32:25.526122 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E1028 11:32:35.462934 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I1028 11:32:57.176513 1 request.go:655] Throttling request took 1.04820022s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W1028 11:32:58.029375 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
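Every error in this block has one root cause: the v1beta1.metrics.k8s.io APIService is registered, but its backing metrics-server pod never becomes ready because its image points at fake.domain, so discovery of that group returns 503 and the quota and garbage collectors keep retrying. The same partial-discovery failure is visible from client-go (a sketch, assuming a working kubeconfig):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// With metrics-server down, the returned error aggregates per-group
	// failures, e.g. "unable to retrieve the complete list of server APIs:
	// metrics.k8s.io/v1beta1: the server is currently unable to handle the request".
	_, err = cs.Discovery().ServerPreferredResources()
	if err != nil {
		fmt.Println(err)
	}
}
```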
==> kube-controller-manager [4937ca78533bbe1e9024be3e8c38035f4fb621e9cfcd8ef6fc974857b5f788d7] <==
I1028 11:24:58.537566 1 shared_informer.go:247] Caches are synced for taint
I1028 11:24:58.537665 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
I1028 11:24:58.537734 1 taint_manager.go:187] Starting NoExecuteTaintManager
W1028 11:24:58.537815 1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-674802. Assuming now as a timestamp.
I1028 11:24:58.537947 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1028 11:24:58.538971 1 event.go:291] "Event occurred" object="old-k8s-version-674802" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-674802 event: Registered Node old-k8s-version-674802 in Controller"
I1028 11:24:58.572667 1 range_allocator.go:373] Set node old-k8s-version-674802 PodCIDR to [10.244.0.0/24]
I1028 11:24:58.577986 1 shared_informer.go:247] Caches are synced for resource quota
I1028 11:24:58.581116 1 shared_informer.go:247] Caches are synced for stateful set
I1028 11:24:58.585263 1 shared_informer.go:247] Caches are synced for resource quota
I1028 11:24:58.630000 1 shared_informer.go:247] Caches are synced for daemon sets
I1028 11:24:58.765904 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-njzd8"
I1028 11:24:58.765945 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sdcls"
I1028 11:24:58.818726 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
E1028 11:24:58.916440 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"dfa45256-7428-4f74-ade3-ef655454ad7c", ResourceVersion:"256", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63865711482, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400042ea80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400042eaa0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x400042eae0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000de7800), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400042e
b80), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400042ebc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400042ec80)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000768ea0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d4fa28), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40000f5730), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000767630)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d4fac8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
E1028 11:24:58.921942 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"f2cb06fd-e71b-4014-8ed7-73d65dab8e3b", ResourceVersion:"413", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63865711483, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241007-36f62932\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400194c020), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400194c040)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400194c060), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400194c080)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400194c0a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generatio
n:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194c0c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:
(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194c0e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlo
ckStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CS
I:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194c100), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Q
uobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241007-36f62932", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400194c120)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400194c160)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i
:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", Sub
Path:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40011b6540), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40020fa3b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004b6070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinit
y:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001d14000)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40020fa400)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v
1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
E1028 11:24:58.936824 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"dfa45256-7428-4f74-ade3-ef655454ad7c", ResourceVersion:"414", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63865711482, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400194c1e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400194c200)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400194c220), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400194c240)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400194c260), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4002013b80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194c280), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400194c2a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400194c2e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40011b6a80), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40020fa5b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004b60e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001d14008)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40020fa608)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I1028 11:24:59.019027 1 shared_informer.go:247] Caches are synced for garbage collector
I1028 11:24:59.023585 1 shared_informer.go:247] Caches are synced for garbage collector
I1028 11:24:59.023615 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1028 11:24:59.864032 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I1028 11:24:59.905691 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-8hwrv"
I1028 11:25:03.538182 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I1028 11:26:28.837836 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
E1028 11:26:29.032226 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
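The two "Operation cannot be fulfilled ... the object has been modified" errors above (first on the kube-proxy DaemonSet, then on the "edit" ClusterRole) are ordinary optimistic-concurrency conflicts: a writer submitted an update built from a stale resourceVersion and the apiserver rejected it with a 409. They are benign here because the controllers re-queue and retry. A minimal sketch of the standard client-go remedy, retry.RetryOnConflict, follows; the clientset plumbing and the RevisionHistoryLimit tweak are illustrative assumptions, not the controller-manager's actual code.

    package retrysketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/util/retry"
    )

    // bumpKubeProxyRevisionLimit updates the kube-proxy DaemonSet, re-reading it
    // on every attempt so each write carries the latest resourceVersion.
    // RetryOnConflict retries only when the apiserver returns 409 Conflict,
    // exactly the error class in the log above.
    func bumpKubeProxyRevisionLimit(cs kubernetes.Interface) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		ds, err := cs.AppsV1().DaemonSets("kube-system").Get(
    			context.TODO(), "kube-proxy", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		limit := int32(10) // illustrative change only
    		ds.Spec.RevisionHistoryLimit = &limit
    		_, err = cs.AppsV1().DaemonSets("kube-system").Update(
    			context.TODO(), ds, metav1.UpdateOptions{})
    		return err
    	})
    }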
==> kube-proxy [8d4b3dad3dd90f3ec833f354de4e8225bdaf07199d5245c988b1fdbc527c1015] <==
I1028 11:24:59.829233 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1028 11:24:59.829534 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1028 11:24:59.919895 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1028 11:24:59.920116 1 server_others.go:185] Using iptables Proxier.
I1028 11:24:59.921460 1 server.go:650] Version: v1.20.0
I1028 11:24:59.923318 1 config.go:315] Starting service config controller
I1028 11:24:59.923340 1 shared_informer.go:240] Waiting for caches to sync for service config
I1028 11:24:59.923421 1 config.go:224] Starting endpoint slice config controller
I1028 11:24:59.923434 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1028 11:25:00.023488 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1028 11:25:00.023806 1 shared_informer.go:247] Caches are synced for service config
==> kube-proxy [c0ed41137fbff35ffcb34df99174bf1cb9e6e2fda2154d421ab797a438e507bf] <==
I1028 11:27:14.626597 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I1028 11:27:14.626741 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W1028 11:27:14.653529 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I1028 11:27:14.653615 1 server_others.go:185] Using iptables Proxier.
I1028 11:27:14.653823 1 server.go:650] Version: v1.20.0
I1028 11:27:14.654747 1 config.go:315] Starting service config controller
I1028 11:27:14.654765 1 shared_informer.go:240] Waiting for caches to sync for service config
I1028 11:27:14.654796 1 config.go:224] Starting endpoint slice config controller
I1028 11:27:14.654799 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I1028 11:27:14.754902 1 shared_informer.go:247] Caches are synced for endpoint slice config
I1028 11:27:14.754963 1 shared_informer.go:247] Caches are synced for service config
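The "Waiting for caches to sync" / "Caches are synced" pairs in both kube-proxy runs above are client-go's shared-informer startup handshake: event handlers must not run until the initial list has been replayed into the local cache. A minimal sketch of that pattern, assuming a pre-built clientset and using the service informer as a stand-in for kube-proxy's config controllers:

    package informersketch

    import (
    	"fmt"

    	"k8s.io/client-go/informers"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/cache"
    )

    func watchServices(cs kubernetes.Interface, stop <-chan struct{}) error {
    	factory := informers.NewSharedInformerFactory(cs, 0)
    	informer := factory.Core().V1().Services().Informer()

    	factory.Start(stop) // begins the list+watch in the background

    	// Block until the first full list is cached, mirroring the
    	// shared_informer.go messages in the kube-proxy logs above.
    	if !cache.WaitForCacheSync(stop, informer.HasSynced) {
    		return fmt.Errorf("timed out waiting for service cache to sync")
    	}
    	return nil
    }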
==> kube-scheduler [31281b2de0e80c98175b18b80c8ece18d25bb88841661719ca7805a5cb795824] <==
I1028 11:27:07.754971 1 serving.go:331] Generated self-signed cert in-memory
W1028 11:27:11.674766 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1028 11:27:11.674812 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1028 11:27:11.674852 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1028 11:27:11.674859 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1028 11:27:11.775536 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1028 11:27:11.775661 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1028 11:27:11.775669 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1028 11:27:11.775690 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1028 11:27:11.924355 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1028 11:27:11.924667 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1028 11:27:11.924866 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1028 11:27:11.925052 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1028 11:27:11.925228 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1028 11:27:11.925393 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1028 11:27:11.925566 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1028 11:27:11.925866 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1028 11:27:11.925999 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1028 11:27:11.926113 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1028 11:27:11.926184 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I1028 11:27:11.976291 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [857580d96023ba113555b54f38493703fce44c36a25523c35c1fd07c51eee056] <==
I1028 11:24:35.070503 1 serving.go:331] Generated self-signed cert in-memory
W1028 11:24:39.542278 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1028 11:24:39.542504 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1028 11:24:39.542662 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W1028 11:24:39.542755 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1028 11:24:39.588029 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I1028 11:24:39.588146 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1028 11:24:39.588160 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1028 11:24:39.588184 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E1028 11:24:39.599682 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1028 11:24:39.600114 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1028 11:24:39.600433 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1028 11:24:39.600696 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1028 11:24:39.600806 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1028 11:24:39.600884 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1028 11:24:39.600944 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1028 11:24:39.601000 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1028 11:24:39.601061 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1028 11:24:39.601123 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1028 11:24:39.601340 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1028 11:24:39.601427 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1028 11:24:40.487956 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1028 11:24:40.519994 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1028 11:24:40.524227 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I1028 11:24:42.288418 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
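The burst of "forbidden" reflector errors in both scheduler runs above is a startup race, not a persistent misconfiguration: the scheduler comes up before the apiserver has finished reconciling the built-in system:kube-scheduler RBAC bindings, and the reflectors keep retrying until the caches sync (11:27:11.976 and 11:24:42.288 respectively). A minimal sketch of probing such a permission with a SelfSubjectAccessReview, the same allow/deny decision the reflector hits; the clientset and the nodes/list probe are illustrative assumptions.

    package rbacsketch

    import (
    	"context"

    	authv1 "k8s.io/api/authorization/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // canListNodes asks the apiserver whether the current credentials may
    // list nodes, one of the permissions the scheduler is denied above.
    func canListNodes(cs kubernetes.Interface) (bool, error) {
    	review := &authv1.SelfSubjectAccessReview{
    		Spec: authv1.SelfSubjectAccessReviewSpec{
    			ResourceAttributes: &authv1.ResourceAttributes{
    				Verb:     "list",
    				Resource: "nodes",
    			},
    		},
    	}
    	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
    		context.TODO(), review, metav1.CreateOptions{})
    	if err != nil {
    		return false, err
    	}
    	return resp.Status.Allowed, nil
    }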
==> kubelet <==
Oct 28 11:31:30 old-k8s-version-674802 kubelet[662]: E1028 11:31:30.396768 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
Oct 28 11:31:32 old-k8s-version-674802 kubelet[662]: E1028 11:31:32.397834 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 11:31:42 old-k8s-version-674802 kubelet[662]: I1028 11:31:42.396459 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
Oct 28 11:31:42 old-k8s-version-674802 kubelet[662]: E1028 11:31:42.396795 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
Oct 28 11:31:47 old-k8s-version-674802 kubelet[662]: E1028 11:31:47.398090 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 11:31:56 old-k8s-version-674802 kubelet[662]: I1028 11:31:56.396437 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
Oct 28 11:31:56 old-k8s-version-674802 kubelet[662]: E1028 11:31:56.396772 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
Oct 28 11:31:58 old-k8s-version-674802 kubelet[662]: E1028 11:31:58.397362 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: I1028 11:32:07.399489 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
Oct 28 11:32:07 old-k8s-version-674802 kubelet[662]: E1028 11:32:07.399844 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
Oct 28 11:32:13 old-k8s-version-674802 kubelet[662]: E1028 11:32:13.397227 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: I1028 11:32:19.397163 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
Oct 28 11:32:19 old-k8s-version-674802 kubelet[662]: E1028 11:32:19.398441 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
Oct 28 11:32:28 old-k8s-version-674802 kubelet[662]: E1028 11:32:28.398320 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: I1028 11:32:34.396501 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
Oct 28 11:32:34 old-k8s-version-674802 kubelet[662]: E1028 11:32:34.396869 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.418625 662 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419047 662 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419679 662 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-bnmqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Oct 28 11:32:39 old-k8s-version-674802 kubelet[662]: E1028 11:32:39.419884 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Oct 28 11:32:47 old-k8s-version-674802 kubelet[662]: I1028 11:32:47.406280 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
Oct 28 11:32:47 old-k8s-version-674802 kubelet[662]: E1028 11:32:47.407313 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
Oct 28 11:32:51 old-k8s-version-674802 kubelet[662]: E1028 11:32:51.416287 662 pod_workers.go:191] Error syncing pod 08133220-8dbe-4283-a64b-8a9383b25c93 ("metrics-server-9975d5f86-lv8qx_kube-system(08133220-8dbe-4283-a64b-8a9383b25c93)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Oct 28 11:32:58 old-k8s-version-674802 kubelet[662]: I1028 11:32:58.396479 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: be9f8802d8916ebf81427a08c9b5699aeaa6098d28b576618aa5899e3f59e09b
Oct 28 11:32:58 old-k8s-version-674802 kubelet[662]: E1028 11:32:58.396862 662 pod_workers.go:191] Error syncing pod 9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e ("dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-8ft4v_kubernetes-dashboard(9ab7365e-63bb-4c9a-b14a-c5b5e3bb526e)"
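The kubelet entries above alternate between ErrImagePull (each fresh attempt fails because fake.domain never resolves; the start output lists that image deliberately for this test) and ImagePullBackOff (kubelet backing off between attempts). Both reasons surface verbatim in pod status, which is how a test helper could detect them; a minimal sketch, with the clientset and namespace as illustrative assumptions:

    package pullsketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // imagePullFailures returns pod/container pairs stuck on image pulls.
    // Waiting.Reason carries the same strings the kubelet logs above:
    // "ErrImagePull" while pulling, "ImagePullBackOff" between attempts.
    func imagePullFailures(cs kubernetes.Interface, ns string) (map[string]string, error) {
    	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return nil, err
    	}
    	failures := map[string]string{}
    	for _, pod := range pods.Items {
    		for _, st := range pod.Status.ContainerStatuses {
    			if w := st.State.Waiting; w != nil &&
    				(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
    				failures[pod.Name+"/"+st.Name] = w.Message
    			}
    		}
    	}
    	return failures, nil
    }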
==> kubernetes-dashboard [9666309986efcb4076982c7df1d9e0c9f905cbcee4a0e3d7a1dcd2ab0132348b] <==
2024/10/28 11:27:39 Using namespace: kubernetes-dashboard
2024/10/28 11:27:39 Using in-cluster config to connect to apiserver
2024/10/28 11:27:39 Using secret token for csrf signing
2024/10/28 11:27:39 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/10/28 11:27:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/10/28 11:27:39 Successful initial request to the apiserver, version: v1.20.0
2024/10/28 11:27:39 Generating JWE encryption key
2024/10/28 11:27:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/10/28 11:27:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/10/28 11:27:40 Initializing JWE encryption key from synchronized object
2024/10/28 11:27:40 Creating in-cluster Sidecar client
2024/10/28 11:27:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 11:27:40 Serving insecurely on HTTP port: 9090
2024/10/28 11:28:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 11:28:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 11:29:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 11:29:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 11:30:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 11:30:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 11:31:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 11:31:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 11:32:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 11:32:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/10/28 11:27:39 Starting overwatch
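The dashboard above keeps serving on port 9090 while its metric client health check fails every 30 seconds, because metrics-server never becomes ready; metrics are treated as optional. A minimal sketch of that fixed-interval retry shape in plain Go; checkSidecar is a hypothetical stand-in for the dashboard's real probe, not its actual code.

    package healthsketch

    import (
    	"context"
    	"log"
    	"time"
    )

    // retrySidecarCheck probes the metrics sidecar every 30s until it succeeds
    // or the context is cancelled, logging failures but never aborting, which
    // matches the repeating "Retrying in 30 seconds" lines above.
    func retrySidecarCheck(ctx context.Context, checkSidecar func() error) {
    	ticker := time.NewTicker(30 * time.Second)
    	defer ticker.Stop()
    	for {
    		if err := checkSidecar(); err != nil {
    			log.Printf("Metric client health check failed: %v. Retrying in 30 seconds.", err)
    		} else {
    			return
    		}
    		select {
    		case <-ctx.Done():
    			return
    		case <-ticker.C:
    		}
    	}
    }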
==> storage-provisioner [af354fdce961d0c931e3e7b6826943560aa505683dbea93dedbce9a94105e0f8] <==
I1028 11:27:58.535210 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I1028 11:27:58.551888 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I1028 11:27:58.552059 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I1028 11:28:16.046242 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I1028 11:28:16.046648 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-674802_477a9f62-9726-4555-8c7e-2c9d782fd3ee!
I1028 11:28:16.051884 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6baa8baf-9963-4ef5-aec2-d198238af88a", APIVersion:"v1", ResourceVersion:"827", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-674802_477a9f62-9726-4555-8c7e-2c9d782fd3ee became leader
I1028 11:28:16.147164 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-674802_477a9f62-9726-4555-8c7e-2c9d782fd3ee!
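The provisioner above waits roughly 18 seconds between "attempting to acquire" and "successfully acquired" the kube-system/k8s.io-minikube-hostpath lock: the previous instance's lease had to expire first. The event shows it uses the legacy Endpoints-based lock; the sketch below uses client-go's newer Lease lock for the same handshake, with the lock name and callbacks echoing the log and the rest as illustrative assumptions.

    package leadersketch

    import (
    	"context"
    	"log"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/leaderelection"
    	"k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    // runWhenLeader blocks until this instance holds the lease, then runs start;
    // only one provisioner acts at a time, as in the log above.
    func runWhenLeader(cs kubernetes.Interface, id string, start func(ctx context.Context)) {
    	lock := &resourcelock.LeaseLock{
    		LeaseMeta: metav1.ObjectMeta{
    			Namespace: "kube-system",
    			Name:      "k8s.io-minikube-hostpath",
    		},
    		Client:     cs.CoordinationV1(),
    		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
    	}
    	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
    		Lock:          lock,
    		LeaseDuration: 15 * time.Second,
    		RenewDeadline: 10 * time.Second,
    		RetryPeriod:   2 * time.Second,
    		Callbacks: leaderelection.LeaderCallbacks{
    			OnStartedLeading: start, // "successfully acquired lease" in the log
    			OnStoppedLeading: func() { log.Printf("%s lost the lease", id) },
    		},
    	})
    }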
==> storage-provisioner [e4aa22206b37d13d9665658eeeb808da6cd7c1789a0887c07a5f6460c9dd38f5] <==
I1028 11:27:14.469698 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1028 11:27:44.472048 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
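This earlier provisioner instance died because its startup probe of the in-cluster service address (10.96.0.1:443) timed out, most likely because the restarted node's service network was not yet reachable; the replacement instance (af354...) above then succeeded. A minimal sketch of that probe, assuming an in-cluster config; it mirrors the "error getting server version" check in spirit, not the provisioner's exact code.

    package versionsketch

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    // apiServerReachable builds an in-cluster client and fetches the server
    // version, the same request that produced the fatal line above.
    func apiServerReachable() error {
    	cfg, err := rest.InClusterConfig() // reads the service-account token and
    	if err != nil {                    // the 10.96.0.1:443 service address
    		return err
    	}
    	cfg.Timeout = 32 * time.Second // same budget as "?timeout=32s" in the log
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		return fmt.Errorf("error getting server version: %w", err)
    	}
    	fmt.Printf("apiserver is up, version %s\n", v.GitVersion)
    	return nil
    }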
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-674802 -n old-k8s-version-674802
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-674802 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-lv8qx
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-674802 describe pod metrics-server-9975d5f86-lv8qx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-674802 describe pod metrics-server-9975d5f86-lv8qx: exit status 1 (124.228069ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-lv8qx" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-674802 describe pod metrics-server-9975d5f86-lv8qx: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (378.69s)
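The post-mortem above finds the offending pod with kubectl's --field-selector=status.phase!=Running, then fails to describe it because the ReplicaSet had already replaced it by name. For completeness, a minimal client-go equivalent of that server-side query; the clientset plumbing is an illustrative assumption.

    package podsketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nonRunningPods lists pods in all namespaces whose phase is not Running;
    // the filter is evaluated by the apiserver, like kubectl's flag above.
    func nonRunningPods(cs kubernetes.Interface) ([]corev1.Pod, error) {
    	list, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
    		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
    	if err != nil {
    		return nil, err
    	}
    	return list.Items, nil
    }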