=== RUN TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run: out/minikube-linux-arm64 start -p old-k8s-version-706521 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-706521 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m16.284608874s)
-- stdout --
* [old-k8s-version-706521] minikube v1.33.1 on Ubuntu 20.04 (arm64)
- MINIKUBE_LOCATION=19283
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/19283-709197/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-709197/.minikube
- MINIKUBE_BIN=out/minikube-linux-arm64
- MINIKUBE_FORCE_SYSTEMD=
* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
* Using the docker driver based on existing profile
* Starting "old-k8s-version-706521" primary control-plane node in "old-k8s-version-706521" cluster
* Pulling base image v0.0.44-1721146479-19264 ...
* Restarting existing docker container for "old-k8s-version-706521" ...
* Preparing Kubernetes v1.20.0 on containerd 1.7.18 ...
* Verifying Kubernetes components...
- Using image registry.k8s.io/echoserver:1.4
- Using image docker.io/kubernetesui/dashboard:v2.7.0
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
- Using image fake.domain/registry.k8s.io/echoserver:1.4
* Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-706521 addons enable metrics-server
* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
-- /stdout --
** stderr **
I0717 20:08:03.619777 921861 out.go:291] Setting OutFile to fd 1 ...
I0717 20:08:03.619938 921861 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:08:03.619944 921861 out.go:304] Setting ErrFile to fd 2...
I0717 20:08:03.619950 921861 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:08:03.620248 921861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-709197/.minikube/bin
I0717 20:08:03.620781 921861 out.go:298] Setting JSON to false
I0717 20:08:03.622093 921861 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":13832,"bootTime":1721233052,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0717 20:08:03.622191 921861 start.go:139] virtualization:
I0717 20:08:03.624892 921861 out.go:177] * [old-k8s-version-706521] minikube v1.33.1 on Ubuntu 20.04 (arm64)
I0717 20:08:03.628255 921861 notify.go:220] Checking for updates...
I0717 20:08:03.630408 921861 out.go:177] - MINIKUBE_LOCATION=19283
I0717 20:08:03.632379 921861 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 20:08:03.634529 921861 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19283-709197/kubeconfig
I0717 20:08:03.636586 921861 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-709197/.minikube
I0717 20:08:03.638316 921861 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0717 20:08:03.640044 921861 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0717 20:08:03.642479 921861 config.go:182] Loaded profile config "old-k8s-version-706521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0717 20:08:03.645216 921861 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
I0717 20:08:03.647080 921861 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 20:08:03.678694 921861 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
I0717 20:08:03.678869 921861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0717 20:08:03.761711 921861 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2024-07-17 20:08:03.749607726 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
I0717 20:08:03.761978 921861 docker.go:307] overlay module found
I0717 20:08:03.764609 921861 out.go:177] * Using the docker driver based on existing profile
I0717 20:08:03.766882 921861 start.go:297] selected driver: docker
I0717 20:08:03.766904 921861 start.go:901] validating driver "docker" against &{Name:old-k8s-version-706521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-706521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 20:08:03.767030 921861 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 20:08:03.767643 921861 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0717 20:08:03.865507 921861 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2024-07-17 20:08:03.854226965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
I0717 20:08:03.865985 921861 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 20:08:03.866016 921861 cni.go:84] Creating CNI manager for ""
I0717 20:08:03.866025 921861 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0717 20:08:03.866070 921861 start.go:340] cluster config:
{Name:old-k8s-version-706521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-706521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 20:08:03.868003 921861 out.go:177] * Starting "old-k8s-version-706521" primary control-plane node in "old-k8s-version-706521" cluster
I0717 20:08:03.869881 921861 cache.go:121] Beginning downloading kic base image for docker with containerd
I0717 20:08:03.871946 921861 out.go:177] * Pulling base image v0.0.44-1721146479-19264 ...
I0717 20:08:03.873887 921861 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0717 20:08:03.873956 921861 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-709197/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
I0717 20:08:03.873971 921861 cache.go:56] Caching tarball of preloaded images
I0717 20:08:03.874061 921861 preload.go:172] Found /home/jenkins/minikube-integration/19283-709197/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0717 20:08:03.874076 921861 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
I0717 20:08:03.874192 921861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/old-k8s-version-706521/config.json ...
I0717 20:08:03.874434 921861 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
W0717 20:08:03.906990 921861 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e is of wrong architecture
I0717 20:08:03.907014 921861 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
I0717 20:08:03.907097 921861 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
I0717 20:08:03.907121 921861 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
I0717 20:08:03.907129 921861 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
I0717 20:08:03.907138 921861 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
I0717 20:08:03.907150 921861 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from local cache
I0717 20:08:04.034174 921861 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from cached tarball
I0717 20:08:04.034219 921861 cache.go:194] Successfully downloaded all kic artifacts
I0717 20:08:04.034277 921861 start.go:360] acquireMachinesLock for old-k8s-version-706521: {Name:mk7a5a89dc42262e5dbd2af5317687febbf841e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 20:08:04.034389 921861 start.go:364] duration metric: took 58.38µs to acquireMachinesLock for "old-k8s-version-706521"
I0717 20:08:04.034416 921861 start.go:96] Skipping create...Using existing machine configuration
I0717 20:08:04.034425 921861 fix.go:54] fixHost starting:
I0717 20:08:04.034724 921861 cli_runner.go:164] Run: docker container inspect old-k8s-version-706521 --format={{.State.Status}}
I0717 20:08:04.057416 921861 fix.go:112] recreateIfNeeded on old-k8s-version-706521: state=Stopped err=<nil>
W0717 20:08:04.057456 921861 fix.go:138] unexpected machine state, will restart: <nil>
I0717 20:08:04.060661 921861 out.go:177] * Restarting existing docker container for "old-k8s-version-706521" ...
I0717 20:08:04.062307 921861 cli_runner.go:164] Run: docker start old-k8s-version-706521
I0717 20:08:04.470071 921861 cli_runner.go:164] Run: docker container inspect old-k8s-version-706521 --format={{.State.Status}}
I0717 20:08:04.503143 921861 kic.go:430] container "old-k8s-version-706521" state is running.
I0717 20:08:04.503543 921861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706521
I0717 20:08:04.526927 921861 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/old-k8s-version-706521/config.json ...
I0717 20:08:04.527185 921861 machine.go:94] provisionDockerMachine start ...
I0717 20:08:04.527290 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:04.546441 921861 main.go:141] libmachine: Using SSH client type: native
I0717 20:08:04.546716 921861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33824 <nil> <nil>}
I0717 20:08:04.546734 921861 main.go:141] libmachine: About to run SSH command:
hostname
I0717 20:08:04.547501 921861 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60206->127.0.0.1:33824: read: connection reset by peer
I0717 20:08:07.690619 921861 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-706521
I0717 20:08:07.690691 921861 ubuntu.go:169] provisioning hostname "old-k8s-version-706521"
I0717 20:08:07.690809 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:07.712033 921861 main.go:141] libmachine: Using SSH client type: native
I0717 20:08:07.712279 921861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33824 <nil> <nil>}
I0717 20:08:07.712290 921861 main.go:141] libmachine: About to run SSH command:
sudo hostname old-k8s-version-706521 && echo "old-k8s-version-706521" | sudo tee /etc/hostname
I0717 20:08:07.865701 921861 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-706521
I0717 20:08:07.865825 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:07.883699 921861 main.go:141] libmachine: Using SSH client type: native
I0717 20:08:07.884053 921861 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33824 <nil> <nil>}
I0717 20:08:07.884084 921861 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sold-k8s-version-706521' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-706521/g' /etc/hosts;
else
echo '127.0.1.1 old-k8s-version-706521' | sudo tee -a /etc/hosts;
fi
fi
I0717 20:08:08.023678 921861 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 20:08:08.023708 921861 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19283-709197/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-709197/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-709197/.minikube}
I0717 20:08:08.023734 921861 ubuntu.go:177] setting up certificates
I0717 20:08:08.023746 921861 provision.go:84] configureAuth start
I0717 20:08:08.023808 921861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706521
I0717 20:08:08.042350 921861 provision.go:143] copyHostCerts
I0717 20:08:08.042425 921861 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-709197/.minikube/ca.pem, removing ...
I0717 20:08:08.042440 921861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-709197/.minikube/ca.pem
I0717 20:08:08.042541 921861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-709197/.minikube/ca.pem (1078 bytes)
I0717 20:08:08.042649 921861 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-709197/.minikube/cert.pem, removing ...
I0717 20:08:08.042661 921861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-709197/.minikube/cert.pem
I0717 20:08:08.042690 921861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-709197/.minikube/cert.pem (1123 bytes)
I0717 20:08:08.042754 921861 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-709197/.minikube/key.pem, removing ...
I0717 20:08:08.042763 921861 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-709197/.minikube/key.pem
I0717 20:08:08.042973 921861 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-709197/.minikube/key.pem (1675 bytes)
I0717 20:08:08.043064 921861 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-709197/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-706521 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-706521]
I0717 20:08:08.935759 921861 provision.go:177] copyRemoteCerts
I0717 20:08:08.935857 921861 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 20:08:08.935918 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:08.952913 921861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/old-k8s-version-706521/id_rsa Username:docker}
I0717 20:08:09.051954 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0717 20:08:09.080634 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
I0717 20:08:09.109054 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0717 20:08:09.136347 921861 provision.go:87] duration metric: took 1.112587925s to configureAuth
I0717 20:08:09.136418 921861 ubuntu.go:193] setting minikube options for container-runtime
I0717 20:08:09.136636 921861 config.go:182] Loaded profile config "old-k8s-version-706521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0717 20:08:09.136651 921861 machine.go:97] duration metric: took 4.609451513s to provisionDockerMachine
I0717 20:08:09.136660 921861 start.go:293] postStartSetup for "old-k8s-version-706521" (driver="docker")
I0717 20:08:09.136672 921861 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 20:08:09.136723 921861 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 20:08:09.136770 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:09.164236 921861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/old-k8s-version-706521/id_rsa Username:docker}
I0717 20:08:09.269031 921861 ssh_runner.go:195] Run: cat /etc/os-release
I0717 20:08:09.272351 921861 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0717 20:08:09.272387 921861 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0717 20:08:09.272398 921861 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0717 20:08:09.272405 921861 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0717 20:08:09.272417 921861 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-709197/.minikube/addons for local assets ...
I0717 20:08:09.272480 921861 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-709197/.minikube/files for local assets ...
I0717 20:08:09.272579 921861 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-709197/.minikube/files/etc/ssl/certs/7145882.pem -> 7145882.pem in /etc/ssl/certs
I0717 20:08:09.272687 921861 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 20:08:09.281567 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/files/etc/ssl/certs/7145882.pem --> /etc/ssl/certs/7145882.pem (1708 bytes)
I0717 20:08:09.314645 921861 start.go:296] duration metric: took 177.969534ms for postStartSetup
I0717 20:08:09.314741 921861 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0717 20:08:09.314794 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:09.360305 921861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/old-k8s-version-706521/id_rsa Username:docker}
I0717 20:08:09.460467 921861 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0717 20:08:09.465191 921861 fix.go:56] duration metric: took 5.430756555s for fixHost
I0717 20:08:09.465216 921861 start.go:83] releasing machines lock for "old-k8s-version-706521", held for 5.430812409s
I0717 20:08:09.465296 921861 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-706521
I0717 20:08:09.486513 921861 ssh_runner.go:195] Run: cat /version.json
I0717 20:08:09.486556 921861 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 20:08:09.486627 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:09.486694 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:09.503547 921861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/old-k8s-version-706521/id_rsa Username:docker}
I0717 20:08:09.542212 921861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/old-k8s-version-706521/id_rsa Username:docker}
I0717 20:08:09.787602 921861 ssh_runner.go:195] Run: systemctl --version
I0717 20:08:09.801800 921861 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0717 20:08:09.811046 921861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0717 20:08:09.842228 921861 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0717 20:08:09.842313 921861 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 20:08:09.856135 921861 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
I0717 20:08:09.856159 921861 start.go:495] detecting cgroup driver to use...
I0717 20:08:09.856193 921861 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0717 20:08:09.856241 921861 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 20:08:09.895906 921861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 20:08:09.914116 921861 docker.go:217] disabling cri-docker service (if available) ...
I0717 20:08:09.914191 921861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0717 20:08:09.930771 921861 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0717 20:08:09.957357 921861 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0717 20:08:10.098395 921861 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0717 20:08:10.257591 921861 docker.go:233] disabling docker service ...
I0717 20:08:10.257654 921861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0717 20:08:10.276019 921861 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0717 20:08:10.295208 921861 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0717 20:08:10.442438 921861 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0717 20:08:10.591098 921861 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0717 20:08:10.607014 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 20:08:10.631979 921861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
I0717 20:08:10.649127 921861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 20:08:10.662692 921861 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 20:08:10.662766 921861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 20:08:10.677192 921861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 20:08:10.690921 921861 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 20:08:10.705999 921861 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 20:08:10.715934 921861 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 20:08:10.730661 921861 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 20:08:10.740516 921861 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 20:08:10.751719 921861 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 20:08:10.760493 921861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 20:08:10.895944 921861 ssh_runner.go:195] Run: sudo systemctl restart containerd
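The sed edits above converge containerd on minikube's expected settings before the restart; a minimal sketch for confirming they landed, assuming a grep over the same config file (the "minikube ssh" wrapper and the grep pattern are illustrative, not part of this test run):

    minikube -p old-k8s-version-706521 ssh -- sudo grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' /etc/containerd/config.toml
    # expected values after the edits: sandbox_image = "registry.k8s.io/pause:3.2",
    # restrict_oom_score_adj = false, SystemdCgroup = false, conf_dir = "/etc/cni/net.d"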
I0717 20:08:11.241312 921861 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0717 20:08:11.241380 921861 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 20:08:11.250202 921861 start.go:563] Will wait 60s for crictl version
I0717 20:08:11.250277 921861 ssh_runner.go:195] Run: which crictl
I0717 20:08:11.255608 921861 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0717 20:08:11.325605 921861 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.18
RuntimeApiVersion: v1
I0717 20:08:11.325690 921861 ssh_runner.go:195] Run: containerd --version
I0717 20:08:11.377388 921861 ssh_runner.go:195] Run: containerd --version
I0717 20:08:11.413271 921861 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.18 ...
I0717 20:08:11.416026 921861 cli_runner.go:164] Run: docker network inspect old-k8s-version-706521 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0717 20:08:11.438763 921861 ssh_runner.go:195] Run: grep 192.168.76.1 host.minikube.internal$ /etc/hosts
I0717 20:08:11.442690 921861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 20:08:11.457451 921861 kubeadm.go:883] updating cluster {Name:old-k8s-version-706521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-706521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0717 20:08:11.457570 921861 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0717 20:08:11.457630 921861 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 20:08:11.540571 921861 containerd.go:627] all images are preloaded for containerd runtime.
I0717 20:08:11.540592 921861 containerd.go:534] Images already preloaded, skipping extraction
I0717 20:08:11.540650 921861 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 20:08:11.617378 921861 containerd.go:627] all images are preloaded for containerd runtime.
I0717 20:08:11.617474 921861 cache_images.go:84] Images are preloaded, skipping loading
I0717 20:08:11.617499 921861 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
I0717 20:08:11.617662 921861 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-706521 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
[Install]
config:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-706521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0717 20:08:11.617749 921861 ssh_runner.go:195] Run: sudo crictl info
I0717 20:08:11.670169 921861 cni.go:84] Creating CNI manager for ""
I0717 20:08:11.670188 921861 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0717 20:08:11.670197 921861 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0717 20:08:11.670215 921861 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-706521 NodeName:old-k8s-version-706521 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
I0717 20:08:11.670352 921861 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.76.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: /run/containerd/containerd.sock
name: "old-k8s-version-706521"
kubeletExtraArgs:
node-ip: 192.168.76.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.0
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
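The YAML above is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a sketch only (this restart flow diffs the file rather than re-running init), the versioned kubeadm binary could validate such a config without mutating the node:

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run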
I0717 20:08:11.670417 921861 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
I0717 20:08:11.680914 921861 binaries.go:44] Found k8s binaries, skipping transfer
I0717 20:08:11.681023 921861 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0717 20:08:11.690604 921861 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
I0717 20:08:11.716849 921861 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0717 20:08:11.747371 921861 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
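After these three writes, systemd's merged view of the kubelet unit can be inspected in one step (a hedged sketch; systemctl cat prints /lib/systemd/system/kubelet.service followed by the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in written above):

    sudo systemctl cat kubelet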
I0717 20:08:11.769422 921861 ssh_runner.go:195] Run: grep 192.168.76.2 control-plane.minikube.internal$ /etc/hosts
I0717 20:08:11.775271 921861 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 20:08:11.790814 921861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 20:08:11.934933 921861 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0717 20:08:11.957002 921861 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/old-k8s-version-706521 for IP: 192.168.76.2
I0717 20:08:11.957021 921861 certs.go:194] generating shared ca certs ...
I0717 20:08:11.957037 921861 certs.go:226] acquiring lock for ca certs: {Name:mkfe19deb7be0c5238e120e88073153330750974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:08:11.957166 921861 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-709197/.minikube/ca.key
I0717 20:08:11.957210 921861 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-709197/.minikube/proxy-client-ca.key
I0717 20:08:11.957216 921861 certs.go:256] generating profile certs ...
I0717 20:08:11.957299 921861 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/old-k8s-version-706521/client.key
I0717 20:08:11.957361 921861 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/old-k8s-version-706521/apiserver.key.ac310fd8
I0717 20:08:11.957399 921861 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/old-k8s-version-706521/proxy-client.key
I0717 20:08:11.957506 921861 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/714588.pem (1338 bytes)
W0717 20:08:11.957534 921861 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-709197/.minikube/certs/714588_empty.pem, impossibly tiny 0 bytes
I0717 20:08:11.957542 921861 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca-key.pem (1679 bytes)
I0717 20:08:11.957566 921861 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca.pem (1078 bytes)
I0717 20:08:11.957588 921861 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/cert.pem (1123 bytes)
I0717 20:08:11.957608 921861 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/key.pem (1675 bytes)
I0717 20:08:11.957655 921861 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/files/etc/ssl/certs/7145882.pem (1708 bytes)
I0717 20:08:11.958285 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0717 20:08:12.038962 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0717 20:08:12.112473 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0717 20:08:12.180472 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0717 20:08:12.230110 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/old-k8s-version-706521/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
I0717 20:08:12.263043 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/old-k8s-version-706521/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0717 20:08:12.293703 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/old-k8s-version-706521/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0717 20:08:12.338530 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/old-k8s-version-706521/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0717 20:08:12.372452 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/certs/714588.pem --> /usr/share/ca-certificates/714588.pem (1338 bytes)
I0717 20:08:12.402636 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/files/etc/ssl/certs/7145882.pem --> /usr/share/ca-certificates/7145882.pem (1708 bytes)
I0717 20:08:12.439232 921861 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0717 20:08:12.472061 921861 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0717 20:08:12.500200 921861 ssh_runner.go:195] Run: openssl version
I0717 20:08:12.508340 921861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/714588.pem && ln -fs /usr/share/ca-certificates/714588.pem /etc/ssl/certs/714588.pem"
I0717 20:08:12.521288 921861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/714588.pem
I0717 20:08:12.526822 921861 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 19:26 /usr/share/ca-certificates/714588.pem
I0717 20:08:12.526905 921861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/714588.pem
I0717 20:08:12.534744 921861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/714588.pem /etc/ssl/certs/51391683.0"
I0717 20:08:12.557255 921861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7145882.pem && ln -fs /usr/share/ca-certificates/7145882.pem /etc/ssl/certs/7145882.pem"
I0717 20:08:12.572127 921861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7145882.pem
I0717 20:08:12.577168 921861 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 19:26 /usr/share/ca-certificates/7145882.pem
I0717 20:08:12.577250 921861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7145882.pem
I0717 20:08:12.587073 921861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7145882.pem /etc/ssl/certs/3ec20f2e.0"
I0717 20:08:12.596613 921861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0717 20:08:12.606954 921861 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0717 20:08:12.611401 921861 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:17 /usr/share/ca-certificates/minikubeCA.pem
I0717 20:08:12.611514 921861 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0717 20:08:12.620033 921861 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0717 20:08:12.629226 921861 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0717 20:08:12.633403 921861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0717 20:08:12.642759 921861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0717 20:08:12.652046 921861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0717 20:08:12.660965 921861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0717 20:08:12.668761 921861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0717 20:08:12.676703 921861 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
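The six -checkend 86400 probes above only answer pass/fail (still valid for at least 24h); a sketch that instead prints the actual expiry dates for the same certificates, with paths taken from the commands above:

    for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
             /var/lib/minikube/certs/apiserver-etcd-client.crt \
             /var/lib/minikube/certs/etcd/server.crt \
             /var/lib/minikube/certs/etcd/healthcheck-client.crt \
             /var/lib/minikube/certs/etcd/peer.crt \
             /var/lib/minikube/certs/front-proxy-client.crt; do
      echo "$c: $(sudo openssl x509 -noout -enddate -in "$c")"
    done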
I0717 20:08:12.685475 921861 kubeadm.go:392] StartCluster: {Name:old-k8s-version-706521 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-706521 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 20:08:12.685634 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0717 20:08:12.685729 921861 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0717 20:08:12.768701 921861 cri.go:89] found id: "c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549"
I0717 20:08:12.768784 921861 cri.go:89] found id: "c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318"
I0717 20:08:12.768804 921861 cri.go:89] found id: "b4df075d5083144f33313e40c45dfac96814169e4ef9d794af9d677c63b69ca1"
I0717 20:08:12.768825 921861 cri.go:89] found id: "6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15"
I0717 20:08:12.768845 921861 cri.go:89] found id: "ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151"
I0717 20:08:12.768878 921861 cri.go:89] found id: "ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3"
I0717 20:08:12.768896 921861 cri.go:89] found id: "f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc"
I0717 20:08:12.768916 921861 cri.go:89] found id: "9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697"
I0717 20:08:12.768946 921861 cri.go:89] found id: ""
I0717 20:08:12.769028 921861 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
I0717 20:08:12.792104 921861 cri.go:116] JSON = null
W0717 20:08:12.792206 921861 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
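
The warning above comes from the restart path: before restarting the control plane, minikube asks the CRI (`crictl ps`) for kube-system containers and asks the low-level OCI runtime (`runc list`) for containers it may need to unpause. Here runc prints the literal `null` (no tracked containers) while crictl reports 8 IDs, so there is nothing to resume and the mismatch is logged and ignored. A minimal Go sketch of that cross-check, assuming runc's JSON list format and taking the crictl count as given; this is an illustration, not minikube's actual code:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // runcContainer mirrors only the fields we need from `runc list -f json`.
    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func main() {
        out, err := exec.Command("sudo", "runc",
            "--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
        if err != nil {
            panic(err)
        }
        // runc prints the literal `null` when it tracks no containers;
        // json.Unmarshal decodes that into a nil slice without error.
        var all []runcContainer
        if err := json.Unmarshal(out, &all); err != nil {
            panic(err)
        }
        paused := 0
        for _, c := range all {
            if c.Status == "paused" {
                paused++
            }
        }
        crictlCount := 8 // the number of IDs `crictl ps` reported above
        if paused != crictlCount {
            fmt.Printf("unpause skipped: runc listed %d paused containers, crictl ps found %d\n",
                paused, crictlCount)
        }
    }
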
I0717 20:08:12.792295 921861 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0717 20:08:12.808342 921861 kubeadm.go:408] found existing configuration files, will attempt cluster restart
I0717 20:08:12.808419 921861 kubeadm.go:593] restartPrimaryControlPlane start ...
I0717 20:08:12.808502 921861 ssh_runner.go:195] Run: sudo test -d /data/minikube
I0717 20:08:12.824405 921861 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:
stderr:
I0717 20:08:12.825313 921861 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-706521" does not appear in /home/jenkins/minikube-integration/19283-709197/kubeconfig
I0717 20:08:12.825861 921861 kubeconfig.go:62] /home/jenkins/minikube-integration/19283-709197/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-706521" cluster setting kubeconfig missing "old-k8s-version-706521" context setting]
I0717 20:08:12.828619 921861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-709197/kubeconfig: {Name:mkfd66af4e41045365ddf719d413d3dd20635b49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:08:12.830214 921861 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0717 20:08:12.847560 921861 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
I0717 20:08:12.847596 921861 kubeadm.go:597] duration metric: took 39.156651ms to restartPrimaryControlPlane
I0717 20:08:12.847607 921861 kubeadm.go:394] duration metric: took 162.154887ms to StartCluster
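
restartPrimaryControlPlane finishes in under 40ms because the regenerated kubeadm.yaml matches the one already on disk: the `diff -u` at 20:08:12.830214 exits 0, so kubeadm.go:630 concludes the running cluster needs no reconfiguration. A rough sketch of that decision, assuming only diff's exit-code convention (0 = identical, non-zero = changed); minikube's real check is more involved:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // diff -u exits 0 when the files are identical and 1 when they differ.
        err := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
        if err == nil {
            fmt.Println("The running cluster does not require reconfiguration")
        } else {
            fmt.Println("kubeadm config changed; control plane must be reconfigured")
        }
    }
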
I0717 20:08:12.847622 921861 settings.go:142] acquiring lock: {Name:mk3c7bd8285e1f7f29e104185adf6ca3fc396c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:08:12.847691 921861 settings.go:150] Updating kubeconfig: /home/jenkins/minikube-integration/19283-709197/kubeconfig
I0717 20:08:12.849438 921861 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-709197/kubeconfig: {Name:mkfd66af4e41045365ddf719d413d3dd20635b49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:08:12.849875 921861 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0717 20:08:12.850313 921861 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0717 20:08:12.850447 921861 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-706521"
I0717 20:08:12.850474 921861 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-706521"
W0717 20:08:12.850485 921861 addons.go:243] addon storage-provisioner should already be in state true
I0717 20:08:12.850516 921861 host.go:66] Checking if "old-k8s-version-706521" exists ...
I0717 20:08:12.851322 921861 cli_runner.go:164] Run: docker container inspect old-k8s-version-706521 --format={{.State.Status}}
I0717 20:08:12.851629 921861 config.go:182] Loaded profile config "old-k8s-version-706521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0717 20:08:12.851715 921861 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-706521"
I0717 20:08:12.851753 921861 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-706521"
I0717 20:08:12.852062 921861 cli_runner.go:164] Run: docker container inspect old-k8s-version-706521 --format={{.State.Status}}
I0717 20:08:12.852470 921861 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-706521"
I0717 20:08:12.852506 921861 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-706521"
W0717 20:08:12.852514 921861 addons.go:243] addon metrics-server should already be in state true
I0717 20:08:12.852553 921861 host.go:66] Checking if "old-k8s-version-706521" exists ...
I0717 20:08:12.853162 921861 cli_runner.go:164] Run: docker container inspect old-k8s-version-706521 --format={{.State.Status}}
I0717 20:08:12.853321 921861 addons.go:69] Setting dashboard=true in profile "old-k8s-version-706521"
I0717 20:08:12.853356 921861 addons.go:234] Setting addon dashboard=true in "old-k8s-version-706521"
W0717 20:08:12.853401 921861 addons.go:243] addon dashboard should already be in state true
I0717 20:08:12.853439 921861 host.go:66] Checking if "old-k8s-version-706521" exists ...
I0717 20:08:12.853879 921861 cli_runner.go:164] Run: docker container inspect old-k8s-version-706521 --format={{.State.Status}}
I0717 20:08:12.854444 921861 out.go:177] * Verifying Kubernetes components...
I0717 20:08:12.858393 921861 ssh_runner.go:195] Run: sudo systemctl daemon-reload
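
From here the four addons (storage-provisioner, default-storageclass, metrics-server, dashboard) are brought up concurrently, which is why their `installing ...`, `scp ...`, and `new ssh client` lines interleave below: each addon gets its own worker that copies its manifests into /etc/kubernetes/addons and then runs a single `kubectl apply` over the group. A minimal sketch of that fan-out, with abbreviated manifest lists and none of the SSH plumbing (an illustration, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "sync"
    )

    func main() {
        // One manifest group per addon; paths follow the log above.
        addons := map[string][]string{
            "storage-provisioner":  {"/etc/kubernetes/addons/storage-provisioner.yaml"},
            "default-storageclass": {"/etc/kubernetes/addons/storageclass.yaml"},
            "metrics-server": {
                "/etc/kubernetes/addons/metrics-apiservice.yaml",
                "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            },
        }
        var wg sync.WaitGroup
        for name, files := range addons {
            wg.Add(1)
            go func(name string, files []string) {
                defer wg.Done()
                args := []string{"apply", "--force"}
                for _, f := range files {
                    args = append(args, "-f", f)
                }
                if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
                    fmt.Printf("addon %s failed: %v\n%s", name, err, out)
                }
            }(name, files)
        }
        wg.Wait()
    }
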
I0717 20:08:12.897469 921861 out.go:177] - Using image registry.k8s.io/echoserver:1.4
I0717 20:08:12.900281 921861 out.go:177] - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0717 20:08:12.906842 921861 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0717 20:08:12.906880 921861 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0717 20:08:12.906953 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:12.931611 921861 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-706521"
W0717 20:08:12.931635 921861 addons.go:243] addon default-storageclass should already be in state true
I0717 20:08:12.931663 921861 host.go:66] Checking if "old-k8s-version-706521" exists ...
I0717 20:08:12.932090 921861 cli_runner.go:164] Run: docker container inspect old-k8s-version-706521 --format={{.State.Status}}
I0717 20:08:12.956784 921861 out.go:177] - Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0717 20:08:12.959010 921861 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0717 20:08:12.959036 921861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0717 20:08:12.959118 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:12.970410 921861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/old-k8s-version-706521/id_rsa Username:docker}
I0717 20:08:12.987638 921861 out.go:177] - Using image fake.domain/registry.k8s.io/echoserver:1.4
I0717 20:08:12.989027 921861 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0717 20:08:12.989046 921861 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0717 20:08:12.989114 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:12.989678 921861 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
I0717 20:08:12.989702 921861 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
I0717 20:08:12.989748 921861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-706521
I0717 20:08:13.031496 921861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/old-k8s-version-706521/id_rsa Username:docker}
I0717 20:08:13.042588 921861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/old-k8s-version-706521/id_rsa Username:docker}
I0717 20:08:13.054055 921861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33824 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/old-k8s-version-706521/id_rsa Username:docker}
I0717 20:08:13.118645 921861 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0717 20:08:13.199895 921861 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-706521" to be "Ready" ...
I0717 20:08:13.233378 921861 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
I0717 20:08:13.233404 921861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
I0717 20:08:13.236821 921861 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0717 20:08:13.236843 921861 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0717 20:08:13.269260 921861 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0717 20:08:13.269286 921861 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0717 20:08:13.287297 921861 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
I0717 20:08:13.287327 921861 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
I0717 20:08:13.302002 921861 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0717 20:08:13.302028 921861 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0717 20:08:13.330027 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0717 20:08:13.353091 921861 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0717 20:08:13.353118 921861 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
I0717 20:08:13.369851 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0717 20:08:13.386621 921861 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
I0717 20:08:13.386648 921861 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
I0717 20:08:13.429905 921861 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0717 20:08:13.429931 921861 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0717 20:08:13.484384 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0717 20:08:13.486233 921861 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0717 20:08:13.486256 921861 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0717 20:08:13.551672 921861 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0717 20:08:13.551698 921861 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0717 20:08:13.664728 921861 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0717 20:08:13.664760 921861 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
W0717 20:08:13.742616 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:13.742651 921861 retry.go:31] will retry after 205.264224ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
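
Every apply from here until roughly 20:08:26 fails the same way: the freshly restarted apiserver is not yet listening on 8443, kubectl gets connection refused, and retry.go schedules another attempt with a growing, jittered delay (205ms, 300ms, 703ms, ... up to several seconds). A minimal sketch of that retry loop; the doubling factor and jitter range are assumptions read off the delays in the log, not minikube's exact policy:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    func retryApply(manifest string, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("kubectl", "apply", "--force", "-f", manifest).Run(); err == nil {
                return nil
            }
            // Jitter each wait so parallel addon workers don't hit the
            // apiserver in lockstep, then grow the base delay.
            sleep := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            base *= 2
        }
        return err
    }

    func main() {
        if err := retryApply("/etc/kubernetes/addons/storage-provisioner.yaml",
            8, 200*time.Millisecond); err != nil {
            fmt.Println("giving up:", err)
        }
    }
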
I0717 20:08:13.767799 921861 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0717 20:08:13.767825 921861 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
W0717 20:08:13.782462 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:13.782526 921861 retry.go:31] will retry after 269.301825ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:08:13.782571 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:13.782586 921861 retry.go:31] will retry after 274.731885ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:13.801620 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0717 20:08:13.911623 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:13.911659 921861 retry.go:31] will retry after 348.242504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:13.948970 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0717 20:08:14.045199 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.045232 921861 retry.go:31] will retry after 300.047712ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.052518 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0717 20:08:14.057916 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:08:14.254480 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.254556 921861 retry.go:31] will retry after 243.58397ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.260908 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0717 20:08:14.283427 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.283511 921861 retry.go:31] will retry after 340.582058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.345704 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0717 20:08:14.400085 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.400184 921861 retry.go:31] will retry after 545.755943ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:08:14.479332 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.479415 921861 retry.go:31] will retry after 703.738729ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.498676 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0717 20:08:14.595298 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.595338 921861 retry.go:31] will retry after 387.51305ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.624503 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:08:14.732449 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.732486 921861 retry.go:31] will retry after 335.187269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:14.946955 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0717 20:08:14.983320 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0717 20:08:15.044275 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:15.044321 921861 retry.go:31] will retry after 673.014464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:15.068637 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:08:15.106333 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:15.106369 921861 retry.go:31] will retry after 573.357731ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:08:15.165899 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:15.165941 921861 retry.go:31] will retry after 817.865944ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:15.184031 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0717 20:08:15.200716 921861 node_ready.go:53] error getting node "old-k8s-version-706521": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-706521": dial tcp 192.168.76.2:8443: connect: connection refused
W0717 20:08:15.259575 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:15.259611 921861 retry.go:31] will retry after 1.152087438s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:15.680133 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0717 20:08:15.718049 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0717 20:08:15.773579 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:15.773628 921861 retry.go:31] will retry after 1.212383249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:08:15.849319 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:15.849404 921861 retry.go:31] will retry after 1.025982301s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:15.984743 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:08:16.066695 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:16.066730 921861 retry.go:31] will retry after 774.851188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:16.412539 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0717 20:08:16.491560 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:16.491643 921861 retry.go:31] will retry after 1.741530751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:16.842525 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0717 20:08:16.876054 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0717 20:08:16.949931 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:16.949965 921861 retry.go:31] will retry after 990.507514ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:16.986241 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0717 20:08:17.032490 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:17.032571 921861 retry.go:31] will retry after 1.709627913s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:08:17.094286 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:17.094332 921861 retry.go:31] will retry after 2.140577625s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:17.200964 921861 node_ready.go:53] error getting node "old-k8s-version-706521": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-706521": dial tcp 192.168.76.2:8443: connect: connection refused
I0717 20:08:17.941352 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:08:18.020679 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:18.020731 921861 retry.go:31] will retry after 2.775732814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:18.233562 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0717 20:08:18.311182 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:18.311213 921861 retry.go:31] will retry after 2.010516916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:18.743341 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
W0717 20:08:18.832488 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:18.832521 921861 retry.go:31] will retry after 1.929132619s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:19.201423 921861 node_ready.go:53] error getting node "old-k8s-version-706521": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-706521": dial tcp 192.168.76.2:8443: connect: connection refused
I0717 20:08:19.235616 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
W0717 20:08:19.316943 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:19.316983 921861 retry.go:31] will retry after 2.034250002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:20.322664 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
W0717 20:08:20.400172 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:20.400209 921861 retry.go:31] will retry after 3.93697824s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:20.762214 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0717 20:08:20.796609 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
W0717 20:08:20.851981 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:20.852013 921861 retry.go:31] will retry after 2.185079028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
W0717 20:08:20.885895 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:20.885932 921861 retry.go:31] will retry after 5.513706375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:21.351883 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0717 20:08:21.700758 921861 node_ready.go:53] error getting node "old-k8s-version-706521": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-706521": dial tcp 192.168.76.2:8443: connect: connection refused
W0717 20:08:21.711574 921861 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:21.711604 921861 retry.go:31] will retry after 4.727835858s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0717 20:08:23.037968 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0717 20:08:24.337768 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
I0717 20:08:26.400691 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
I0717 20:08:26.440020 921861 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
I0717 20:08:29.118109 921861 node_ready.go:49] node "old-k8s-version-706521" has status "Ready":"True"
I0717 20:08:29.118141 921861 node_ready.go:38] duration metric: took 15.918213633s for node "old-k8s-version-706521" to be "Ready" ...
I0717 20:08:29.118157 921861 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
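
node_ready polls the node object directly against https://192.168.76.2:8443 (which is why the earlier attempts show dial-time connection refused rather than kubectl errors) and returns once the NodeReady condition flips to True, here after ~15.9s. A minimal client-go sketch of that poll; the kubeconfig path and 2s interval are assumptions:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            node, err := client.CoreV1().Nodes().Get(context.TODO(),
                "old-k8s-version-706521", metav1.GetOptions{})
            if err != nil {
                // Transient "connection refused" errors are expected while
                // the apiserver comes back up, so keep polling, don't fail.
                return false, nil
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        fmt.Println("node ready:", err == nil)
    }
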
I0717 20:08:29.301252 921861 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-92n49" in "kube-system" namespace to be "Ready" ...
I0717 20:08:29.354652 921861 pod_ready.go:92] pod "coredns-74ff55c5b-92n49" in "kube-system" namespace has status "Ready":"True"
I0717 20:08:29.354680 921861 pod_ready.go:81] duration metric: took 53.392189ms for pod "coredns-74ff55c5b-92n49" in "kube-system" namespace to be "Ready" ...
I0717 20:08:29.354692 921861 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-706521" in "kube-system" namespace to be "Ready" ...
I0717 20:08:29.437425 921861 pod_ready.go:92] pod "etcd-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"True"
I0717 20:08:29.437454 921861 pod_ready.go:81] duration metric: took 82.753232ms for pod "etcd-old-k8s-version-706521" in "kube-system" namespace to be "Ready" ...
I0717 20:08:29.437470 921861 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-706521" in "kube-system" namespace to be "Ready" ...
I0717 20:08:29.483848 921861 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"True"
I0717 20:08:29.483884 921861 pod_ready.go:81] duration metric: took 46.405457ms for pod "kube-apiserver-old-k8s-version-706521" in "kube-system" namespace to be "Ready" ...
I0717 20:08:29.483906 921861 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace to be "Ready" ...
I0717 20:08:31.343843 921861 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.305828899s)
I0717 20:08:31.344080 921861 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.006285352s)
I0717 20:08:31.344170 921861 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.943449904s)
I0717 20:08:31.344193 921861 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-706521"
I0717 20:08:31.344220 921861 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.904180005s)
I0717 20:08:31.347010 921861 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
minikube -p old-k8s-version-706521 addons enable metrics-server
I0717 20:08:31.352325 921861 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
I0717 20:08:31.354991 921861 addons.go:510] duration metric: took 18.504672632s to enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
I0717 20:08:31.502940 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:08:33.995156 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:08:36.490718 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:08:39.026672 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:08:41.491571 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:08:43.991731 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:08:46.491731 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:08:48.993568 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:08:51.491203 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:08:53.991892 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:08:56.490924 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:08:59.001145 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:01.490962 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:03.990365 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:05.990645 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:08.490938 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:10.991087 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:13.491362 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:15.990967 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:18.015718 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:20.491638 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:22.991573 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:24.994517 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:27.491866 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:29.990168 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:32.489936 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:34.492130 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:36.493060 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:38.990676 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:40.991391 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:42.998265 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:45.496277 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:47.989856 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:49.991232 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:52.496679 921861 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:54.993020 921861 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"True"
I0717 20:09:54.993050 921861 pod_ready.go:81] duration metric: took 1m25.509134134s for pod "kube-controller-manager-old-k8s-version-706521" in "kube-system" namespace to be "Ready" ...
I0717 20:09:54.993062 921861 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wl7dv" in "kube-system" namespace to be "Ready" ...
I0717 20:09:55.003480 921861 pod_ready.go:92] pod "kube-proxy-wl7dv" in "kube-system" namespace has status "Ready":"True"
I0717 20:09:55.003505 921861 pod_ready.go:81] duration metric: took 10.435123ms for pod "kube-proxy-wl7dv" in "kube-system" namespace to be "Ready" ...
I0717 20:09:55.003518 921861 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-706521" in "kube-system" namespace to be "Ready" ...
I0717 20:09:55.011525 921861 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-706521" in "kube-system" namespace has status "Ready":"True"
I0717 20:09:55.011564 921861 pod_ready.go:81] duration metric: took 8.036854ms for pod "kube-scheduler-old-k8s-version-706521" in "kube-system" namespace to be "Ready" ...
I0717 20:09:55.011579 921861 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace to be "Ready" ...
I0717 20:09:57.019153 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:09:59.517750 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:01.524804 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:04.018706 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:06.518502 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:08.519195 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:11.018483 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:13.018583 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:15.034701 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:17.518080 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:19.518117 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:21.519948 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:24.019076 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:26.518277 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:28.518831 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:31.018350 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:33.018712 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:35.018865 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:37.517827 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:39.518480 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:42.019312 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:44.518131 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:47.018280 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:49.024989 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:51.517924 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:54.018618 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:56.517689 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:10:59.018227 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:01.524960 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:04.018498 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:06.025960 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:08.517911 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:10.518519 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:13.018958 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:15.024260 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:17.518355 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:20.019393 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:22.517523 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:25.018884 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:27.019665 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:29.518740 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:32.018448 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:34.019297 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:36.019432 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:38.517464 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:40.517563 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:42.518114 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:45.035883 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:47.519429 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:49.519613 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:52.018642 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:54.019123 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:56.023259 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:11:58.521856 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:01.018431 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:03.019393 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:05.518268 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:07.518777 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:10.018659 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:12.018930 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:14.019062 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:16.518042 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:19.018220 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:21.022128 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:23.517769 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:25.518098 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:27.518920 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:29.519038 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:32.018387 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:34.018766 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:36.520151 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:39.018650 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:41.018688 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:43.022284 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:45.024154 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:47.517471 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:49.524362 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:52.020300 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:54.023580 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:56.519249 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:12:59.017940 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:01.517653 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:03.518901 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:05.518934 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:08.018009 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:10.021706 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:12.518975 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:15.025684 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:17.517575 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:19.518488 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:21.642428 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:24.019247 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:26.518592 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:29.018604 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:31.019092 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:33.517404 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:35.519009 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:38.019247 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:40.020987 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:42.518208 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:44.521460 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:47.019706 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:49.518424 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:52.018575 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:54.019770 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:55.026998 921861 pod_ready.go:81] duration metric: took 4m0.0153977s for pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace to be "Ready" ...
E0717 20:13:55.027023 921861 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0717 20:13:55.027033 921861 pod_ready.go:38] duration metric: took 5m25.908864881s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0717 20:13:55.027259 921861 api_server.go:52] waiting for apiserver process to appear ...
I0717 20:13:55.027292 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0717 20:13:55.027364 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0717 20:13:55.096172 921861 cri.go:89] found id: "75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a"
I0717 20:13:55.096196 921861 cri.go:89] found id: "f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc"
I0717 20:13:55.096202 921861 cri.go:89] found id: ""
I0717 20:13:55.096209 921861 logs.go:276] 2 containers: [75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc]
I0717 20:13:55.096268 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.106201 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.113900 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0717 20:13:55.114023 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0717 20:13:55.204482 921861 cri.go:89] found id: "ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422"
I0717 20:13:55.204501 921861 cri.go:89] found id: "ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151"
I0717 20:13:55.204506 921861 cri.go:89] found id: ""
I0717 20:13:55.204514 921861 logs.go:276] 2 containers: [ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422 ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151]
I0717 20:13:55.204571 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.214697 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.219565 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0717 20:13:55.219649 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0717 20:13:55.303542 921861 cri.go:89] found id: "2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd"
I0717 20:13:55.303570 921861 cri.go:89] found id: "c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549"
I0717 20:13:55.303577 921861 cri.go:89] found id: ""
I0717 20:13:55.303592 921861 logs.go:276] 2 containers: [2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549]
I0717 20:13:55.303649 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.310040 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.314477 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0717 20:13:55.314561 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0717 20:13:55.383864 921861 cri.go:89] found id: "0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3"
I0717 20:13:55.383897 921861 cri.go:89] found id: "9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697"
I0717 20:13:55.383902 921861 cri.go:89] found id: ""
I0717 20:13:55.383910 921861 logs.go:276] 2 containers: [0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3 9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697]
I0717 20:13:55.383976 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.390821 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.402017 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0717 20:13:55.402115 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0717 20:13:55.486436 921861 cri.go:89] found id: "66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e"
I0717 20:13:55.486459 921861 cri.go:89] found id: "6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15"
I0717 20:13:55.486473 921861 cri.go:89] found id: ""
I0717 20:13:55.486481 921861 logs.go:276] 2 containers: [66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e 6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15]
I0717 20:13:55.486564 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.494243 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.504164 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0717 20:13:55.504245 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0717 20:13:55.582028 921861 cri.go:89] found id: "54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2"
I0717 20:13:55.582076 921861 cri.go:89] found id: "ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3"
I0717 20:13:55.582082 921861 cri.go:89] found id: ""
I0717 20:13:55.582093 921861 logs.go:276] 2 containers: [54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2 ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3]
I0717 20:13:55.582167 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.591513 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.596286 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0717 20:13:55.596367 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0717 20:13:55.665385 921861 cri.go:89] found id: "6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead"
I0717 20:13:55.665404 921861 cri.go:89] found id: "c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318"
I0717 20:13:55.665410 921861 cri.go:89] found id: ""
I0717 20:13:55.665417 921861 logs.go:276] 2 containers: [6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318]
I0717 20:13:55.665488 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.674833 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.682477 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0717 20:13:55.682546 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0717 20:13:55.793123 921861 cri.go:89] found id: "c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d"
I0717 20:13:55.793144 921861 cri.go:89] found id: "d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad"
I0717 20:13:55.793149 921861 cri.go:89] found id: ""
I0717 20:13:55.793157 921861 logs.go:276] 2 containers: [c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad]
I0717 20:13:55.793223 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.798047 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.802770 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0717 20:13:55.802958 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0717 20:13:55.902238 921861 cri.go:89] found id: "53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71"
I0717 20:13:55.902267 921861 cri.go:89] found id: ""
I0717 20:13:55.902286 921861 logs.go:276] 1 container: [53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71]
I0717 20:13:55.902361 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.907275 921861 logs.go:123] Gathering logs for kube-proxy [6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15] ...
I0717 20:13:55.907306 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15"
I0717 20:13:55.974153 921861 logs.go:123] Gathering logs for kube-controller-manager [54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2] ...
I0717 20:13:55.974188 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2"
I0717 20:13:56.194545 921861 logs.go:123] Gathering logs for kindnet [c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318] ...
I0717 20:13:56.194586 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318"
I0717 20:13:56.324440 921861 logs.go:123] Gathering logs for storage-provisioner [d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad] ...
I0717 20:13:56.324519 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad"
I0717 20:13:56.374460 921861 logs.go:123] Gathering logs for kubernetes-dashboard [53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71] ...
I0717 20:13:56.374485 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71"
I0717 20:13:56.429747 921861 logs.go:123] Gathering logs for kubelet ...
I0717 20:13:56.429819 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0717 20:13:56.486016 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:30 old-k8s-version-706521 kubelet[662]: E0717 20:08:30.661470 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:13:56.486228 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:30 old-k8s-version-706521 kubelet[662]: E0717 20:08:30.828892 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.489701 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:46 old-k8s-version-706521 kubelet[662]: E0717 20:08:46.565471 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:13:56.490071 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:47 old-k8s-version-706521 kubelet[662]: E0717 20:08:47.951245 662 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-97vqh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-97vqh" is forbidden: User "system:node:old-k8s-version-706521" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-706521' and this object
W0717 20:13:56.492040 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:59 old-k8s-version-706521 kubelet[662]: E0717 20:08:59.963531 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.492517 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:00 old-k8s-version-706521 kubelet[662]: E0717 20:09:00.968809 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.492710 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:01 old-k8s-version-706521 kubelet[662]: E0717 20:09:01.552815 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.493508 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:02 old-k8s-version-706521 kubelet[662]: E0717 20:09:02.015089 662 pod_workers.go:191] Error syncing pod acbb1d8e-4bf9-4590-b17d-5cb03849d6a4 ("storage-provisioner_kube-system(acbb1d8e-4bf9-4590-b17d-5cb03849d6a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(acbb1d8e-4bf9-4590-b17d-5cb03849d6a4)"
W0717 20:13:56.493842 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:04 old-k8s-version-706521 kubelet[662]: E0717 20:09:04.098038 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.496729 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:15 old-k8s-version-706521 kubelet[662]: E0717 20:09:15.561998 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:13:56.497465 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:19 old-k8s-version-706521 kubelet[662]: E0717 20:09:19.118357 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.497801 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:24 old-k8s-version-706521 kubelet[662]: E0717 20:09:24.098214 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.497989 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:26 old-k8s-version-706521 kubelet[662]: E0717 20:09:26.558472 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.498326 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:36 old-k8s-version-706521 kubelet[662]: E0717 20:09:36.552738 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.498514 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:37 old-k8s-version-706521 kubelet[662]: E0717 20:09:37.562445 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.498854 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:49 old-k8s-version-706521 kubelet[662]: E0717 20:09:49.559705 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.499351 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:50 old-k8s-version-706521 kubelet[662]: E0717 20:09:50.204203 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.499701 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:54 old-k8s-version-706521 kubelet[662]: E0717 20:09:54.098178 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.502351 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:01 old-k8s-version-706521 kubelet[662]: E0717 20:10:01.562475 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:13:56.502703 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:06 old-k8s-version-706521 kubelet[662]: E0717 20:10:06.556299 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.502910 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:15 old-k8s-version-706521 kubelet[662]: E0717 20:10:15.552476 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.503246 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:18 old-k8s-version-706521 kubelet[662]: E0717 20:10:18.552987 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.503663 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:26 old-k8s-version-706521 kubelet[662]: E0717 20:10:26.552937 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.504022 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:29 old-k8s-version-706521 kubelet[662]: E0717 20:10:29.552164 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.504216 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:38 old-k8s-version-706521 kubelet[662]: E0717 20:10:38.554109 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.504923 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:42 old-k8s-version-706521 kubelet[662]: E0717 20:10:42.348760 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.505274 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:44 old-k8s-version-706521 kubelet[662]: E0717 20:10:44.097722 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.505483 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:51 old-k8s-version-706521 kubelet[662]: E0717 20:10:51.552421 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.505837 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:59 old-k8s-version-706521 kubelet[662]: E0717 20:10:59.552527 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.506029 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:03 old-k8s-version-706521 kubelet[662]: E0717 20:11:03.552379 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.506365 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:13 old-k8s-version-706521 kubelet[662]: E0717 20:11:13.552539 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.506564 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:17 old-k8s-version-706521 kubelet[662]: E0717 20:11:17.552652 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.506935 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:25 old-k8s-version-706521 kubelet[662]: E0717 20:11:25.552153 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.509562 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:32 old-k8s-version-706521 kubelet[662]: E0717 20:11:32.561515 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:13:56.509932 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:38 old-k8s-version-706521 kubelet[662]: E0717 20:11:38.552601 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.510125 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:43 old-k8s-version-706521 kubelet[662]: E0717 20:11:43.552468 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.510466 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:49 old-k8s-version-706521 kubelet[662]: E0717 20:11:49.552132 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.510654 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:58 old-k8s-version-706521 kubelet[662]: E0717 20:11:58.552478 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.511368 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:04 old-k8s-version-706521 kubelet[662]: E0717 20:12:04.568005 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.511566 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:12 old-k8s-version-706521 kubelet[662]: E0717 20:12:12.552438 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.511917 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:14 old-k8s-version-706521 kubelet[662]: E0717 20:12:14.098192 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.512263 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:25 old-k8s-version-706521 kubelet[662]: E0717 20:12:25.552632 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.512455 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:25 old-k8s-version-706521 kubelet[662]: E0717 20:12:25.552990 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.512646 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:36 old-k8s-version-706521 kubelet[662]: E0717 20:12:36.552671 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.513015 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:36 old-k8s-version-706521 kubelet[662]: E0717 20:12:36.553533 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.513239 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:47 old-k8s-version-706521 kubelet[662]: E0717 20:12:47.552573 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.513583 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:48 old-k8s-version-706521 kubelet[662]: E0717 20:12:48.552271 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.513775 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:01 old-k8s-version-706521 kubelet[662]: E0717 20:13:01.552554 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.514171 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:03 old-k8s-version-706521 kubelet[662]: E0717 20:13:03.552206 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.514363 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:14 old-k8s-version-706521 kubelet[662]: E0717 20:13:14.555362 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.514705 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:18 old-k8s-version-706521 kubelet[662]: E0717 20:13:18.552316 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.514915 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:25 old-k8s-version-706521 kubelet[662]: E0717 20:13:25.554625 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.515257 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:32 old-k8s-version-706521 kubelet[662]: E0717 20:13:32.553357 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.515445 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:39 old-k8s-version-706521 kubelet[662]: E0717 20:13:39.552513 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.515818 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:47 old-k8s-version-706521 kubelet[662]: E0717 20:13:47.552150 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.516033 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:50 old-k8s-version-706521 kubelet[662]: E0717 20:13:50.552908 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
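Two failure signatures account for every warning in the run above: metrics-server never starts because its image fake.domain/registry.k8s.io/echoserver:1.4 points at an unresolvable registry host (so ImagePullBackOff is the expected steady state), and dashboard-metrics-scraper is crash-looping, with the kubelet's restart back-off climbing from 1m20s to 2m40s over the course of the log. A minimal sketch of checking both from outside the node, assuming kubectl is pointed at this profile's kubeconfig (pod names are taken from the log lines above):

  kubectl -n kube-system describe pod metrics-server-9975d5f86-xmdkg | grep -A5 Events:
  kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-48n4d --previous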
I0717 20:13:56.516083 921861 logs.go:123] Gathering logs for kube-apiserver [f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc] ...
I0717 20:13:56.516105 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc"
I0717 20:13:56.603364 921861 logs.go:123] Gathering logs for coredns [c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549] ...
I0717 20:13:56.603396 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549"
I0717 20:13:56.650145 921861 logs.go:123] Gathering logs for kindnet [6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead] ...
I0717 20:13:56.650174 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead"
I0717 20:13:56.737339 921861 logs.go:123] Gathering logs for storage-provisioner [c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d] ...
I0717 20:13:56.737380 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d"
I0717 20:13:56.783307 921861 logs.go:123] Gathering logs for containerd ...
I0717 20:13:56.783337 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0717 20:13:56.852823 921861 logs.go:123] Gathering logs for coredns [2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd] ...
I0717 20:13:56.852908 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd"
I0717 20:13:56.903677 921861 logs.go:123] Gathering logs for kube-scheduler [0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3] ...
I0717 20:13:56.903750 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3"
I0717 20:13:56.960337 921861 logs.go:123] Gathering logs for kube-controller-manager [ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3] ...
I0717 20:13:56.960406 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3"
I0717 20:13:57.050649 921861 logs.go:123] Gathering logs for dmesg ...
I0717 20:13:57.050686 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0717 20:13:57.072456 921861 logs.go:123] Gathering logs for kube-apiserver [75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a] ...
I0717 20:13:57.072498 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a"
I0717 20:13:57.139790 921861 logs.go:123] Gathering logs for kube-scheduler [9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697] ...
I0717 20:13:57.139825 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697"
I0717 20:13:57.186897 921861 logs.go:123] Gathering logs for kube-proxy [66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e] ...
I0717 20:13:57.186927 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e"
I0717 20:13:57.226201 921861 logs.go:123] Gathering logs for container status ...
I0717 20:13:57.226231 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
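The "container status" command above is runtime-agnostic: the backticked `which crictl || echo crictl` guarantees something is substituted even when crictl is missing, and the trailing `|| sudo docker ps -a` catches the resulting failure so docker-based clusters are still covered. Spelled out, the control flow is roughly (readability sketch only; the one-liner also falls back to docker if crictl exists but errors):

  if command -v crictl >/dev/null 2>&1; then
    sudo crictl ps -a
  else
    sudo docker ps -a
  fi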
I0717 20:13:57.269422 921861 logs.go:123] Gathering logs for describe nodes ...
I0717 20:13:57.269452 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0717 20:13:57.411433 921861 logs.go:123] Gathering logs for etcd [ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422] ...
I0717 20:13:57.411466 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422"
I0717 20:13:57.452643 921861 logs.go:123] Gathering logs for etcd [ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151] ...
I0717 20:13:57.452674 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151"
I0717 20:13:57.491246 921861 out.go:304] Setting ErrFile to fd 2...
I0717 20:13:57.491275 921861 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0717 20:13:57.491328 921861 out.go:239] X Problems detected in kubelet:
W0717 20:13:57.491342 921861 out.go:239] Jul 17 20:13:25 old-k8s-version-706521 kubelet[662]: E0717 20:13:25.554625 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:57.491352 921861 out.go:239] Jul 17 20:13:32 old-k8s-version-706521 kubelet[662]: E0717 20:13:32.553357 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:57.491364 921861 out.go:239] Jul 17 20:13:39 old-k8s-version-706521 kubelet[662]: E0717 20:13:39.552513 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:57.491372 921861 out.go:239] Jul 17 20:13:47 old-k8s-version-706521 kubelet[662]: E0717 20:13:47.552150 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:57.491382 921861 out.go:239] Jul 17 20:13:50 old-k8s-version-706521 kubelet[662]: E0717 20:13:50.552908 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0717 20:13:57.491388 921861 out.go:304] Setting ErrFile to fd 2...
I0717 20:13:57.491401 921861 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:14:07.493042 921861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0717 20:14:07.512178 921861 api_server.go:72] duration metric: took 5m54.662260002s to wait for apiserver process to appear ...
I0717 20:14:07.512202 921861 api_server.go:88] waiting for apiserver healthz status ...
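The pgrep/healthz pair above is the wait loop proper: minikube first confirms a kube-apiserver process exists (which here took 5m54s of the wait budget), then polls the apiserver's /healthz endpoint until it answers "ok" or the wait times out. Roughly the same check done by hand, under the assumption that the docker driver publishes the apiserver on the container's forwarded port 8443 (in reality the port and container name come from the profile):

  HOSTPORT=$(docker port old-k8s-version-706521 8443)
  curl -sk "https://${HOSTPORT}/healthz"   # a healthy apiserver replies: ok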
I0717 20:14:07.512237 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0717 20:14:07.512289 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0717 20:14:07.562714 921861 cri.go:89] found id: "75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a"
I0717 20:14:07.562742 921861 cri.go:89] found id: "f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc"
I0717 20:14:07.562748 921861 cri.go:89] found id: ""
I0717 20:14:07.562755 921861 logs.go:276] 2 containers: [75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc]
I0717 20:14:07.562824 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.566741 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.570378 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0717 20:14:07.570441 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0717 20:14:07.617037 921861 cri.go:89] found id: "ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422"
I0717 20:14:07.617061 921861 cri.go:89] found id: "ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151"
I0717 20:14:07.617069 921861 cri.go:89] found id: ""
I0717 20:14:07.617077 921861 logs.go:276] 2 containers: [ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422 ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151]
I0717 20:14:07.617132 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.621162 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.625269 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0717 20:14:07.625335 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0717 20:14:07.683294 921861 cri.go:89] found id: "2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd"
I0717 20:14:07.683313 921861 cri.go:89] found id: "c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549"
I0717 20:14:07.683318 921861 cri.go:89] found id: ""
I0717 20:14:07.683325 921861 logs.go:276] 2 containers: [2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549]
I0717 20:14:07.683383 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.687487 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.691418 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0717 20:14:07.691484 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0717 20:14:07.745936 921861 cri.go:89] found id: "0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3"
I0717 20:14:07.745956 921861 cri.go:89] found id: "9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697"
I0717 20:14:07.745961 921861 cri.go:89] found id: ""
I0717 20:14:07.745968 921861 logs.go:276] 2 containers: [0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3 9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697]
I0717 20:14:07.746028 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.750020 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.753707 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0717 20:14:07.753770 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0717 20:14:07.806380 921861 cri.go:89] found id: "66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e"
I0717 20:14:07.806403 921861 cri.go:89] found id: "6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15"
I0717 20:14:07.806414 921861 cri.go:89] found id: ""
I0717 20:14:07.806421 921861 logs.go:276] 2 containers: [66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e 6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15]
I0717 20:14:07.806474 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.810664 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.814569 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0717 20:14:07.814693 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0717 20:14:07.880269 921861 cri.go:89] found id: "54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2"
I0717 20:14:07.880351 921861 cri.go:89] found id: "ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3"
I0717 20:14:07.880374 921861 cri.go:89] found id: ""
I0717 20:14:07.880411 921861 logs.go:276] 2 containers: [54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2 ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3]
I0717 20:14:07.880497 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.884980 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.888721 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0717 20:14:07.888842 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0717 20:14:07.940015 921861 cri.go:89] found id: "6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead"
I0717 20:14:07.940090 921861 cri.go:89] found id: "c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318"
I0717 20:14:07.940110 921861 cri.go:89] found id: ""
I0717 20:14:07.940132 921861 logs.go:276] 2 containers: [6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318]
I0717 20:14:07.940220 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.951143 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.954868 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0717 20:14:07.955006 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0717 20:14:08.017613 921861 cri.go:89] found id: "53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71"
I0717 20:14:08.017686 921861 cri.go:89] found id: ""
I0717 20:14:08.017714 921861 logs.go:276] 1 containers: [53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71]
I0717 20:14:08.017805 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:08.022399 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0717 20:14:08.022533 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0717 20:14:08.099451 921861 cri.go:89] found id: "c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d"
I0717 20:14:08.099529 921861 cri.go:89] found id: "d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad"
I0717 20:14:08.099549 921861 cri.go:89] found id: ""
I0717 20:14:08.099572 921861 logs.go:276] 2 containers: [c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad]
I0717 20:14:08.099683 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:08.104380 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:08.110423 921861 logs.go:123] Gathering logs for etcd [ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422] ...
I0717 20:14:08.110447 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422"
I0717 20:14:08.177584 921861 logs.go:123] Gathering logs for kube-proxy [66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e] ...
I0717 20:14:08.177658 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e"
I0717 20:14:08.227350 921861 logs.go:123] Gathering logs for kindnet [6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead] ...
I0717 20:14:08.227427 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead"
I0717 20:14:08.343745 921861 logs.go:123] Gathering logs for kubernetes-dashboard [53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71] ...
I0717 20:14:08.343780 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71"
I0717 20:14:08.413539 921861 logs.go:123] Gathering logs for storage-provisioner [d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad] ...
I0717 20:14:08.413570 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad"
I0717 20:14:08.461679 921861 logs.go:123] Gathering logs for container status ...
I0717 20:14:08.461713 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0717 20:14:08.543346 921861 logs.go:123] Gathering logs for kubelet ...
I0717 20:14:08.543377 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0717 20:14:08.621760 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:30 old-k8s-version-706521 kubelet[662]: E0717 20:08:30.661470 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:14:08.621988 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:30 old-k8s-version-706521 kubelet[662]: E0717 20:08:30.828892 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.625852 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:46 old-k8s-version-706521 kubelet[662]: E0717 20:08:46.565471 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:14:08.626261 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:47 old-k8s-version-706521 kubelet[662]: E0717 20:08:47.951245 662 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-97vqh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-97vqh" is forbidden: User "system:node:old-k8s-version-706521" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-706521' and this object
W0717 20:14:08.628352 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:59 old-k8s-version-706521 kubelet[662]: E0717 20:08:59.963531 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.628854 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:00 old-k8s-version-706521 kubelet[662]: E0717 20:09:00.968809 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.629068 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:01 old-k8s-version-706521 kubelet[662]: E0717 20:09:01.552815 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.629971 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:02 old-k8s-version-706521 kubelet[662]: E0717 20:09:02.015089 662 pod_workers.go:191] Error syncing pod acbb1d8e-4bf9-4590-b17d-5cb03849d6a4 ("storage-provisioner_kube-system(acbb1d8e-4bf9-4590-b17d-5cb03849d6a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(acbb1d8e-4bf9-4590-b17d-5cb03849d6a4)"
W0717 20:14:08.630343 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:04 old-k8s-version-706521 kubelet[662]: E0717 20:09:04.098038 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.633441 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:15 old-k8s-version-706521 kubelet[662]: E0717 20:09:15.561998 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:14:08.634294 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:19 old-k8s-version-706521 kubelet[662]: E0717 20:09:19.118357 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.634663 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:24 old-k8s-version-706521 kubelet[662]: E0717 20:09:24.098214 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.634890 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:26 old-k8s-version-706521 kubelet[662]: E0717 20:09:26.558472 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.635270 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:36 old-k8s-version-706521 kubelet[662]: E0717 20:09:36.552738 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.635558 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:37 old-k8s-version-706521 kubelet[662]: E0717 20:09:37.562445 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.635910 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:49 old-k8s-version-706521 kubelet[662]: E0717 20:09:49.559705 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.636414 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:50 old-k8s-version-706521 kubelet[662]: E0717 20:09:50.204203 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.636847 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:54 old-k8s-version-706521 kubelet[662]: E0717 20:09:54.098178 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.639637 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:01 old-k8s-version-706521 kubelet[662]: E0717 20:10:01.562475 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:14:08.640283 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:06 old-k8s-version-706521 kubelet[662]: E0717 20:10:06.556299 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.640568 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:15 old-k8s-version-706521 kubelet[662]: E0717 20:10:15.552476 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.640939 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:18 old-k8s-version-706521 kubelet[662]: E0717 20:10:18.552987 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.641153 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:26 old-k8s-version-706521 kubelet[662]: E0717 20:10:26.552937 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.641509 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:29 old-k8s-version-706521 kubelet[662]: E0717 20:10:29.552164 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.641718 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:38 old-k8s-version-706521 kubelet[662]: E0717 20:10:38.554109 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.642392 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:42 old-k8s-version-706521 kubelet[662]: E0717 20:10:42.348760 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.642757 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:44 old-k8s-version-706521 kubelet[662]: E0717 20:10:44.097722 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.642994 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:51 old-k8s-version-706521 kubelet[662]: E0717 20:10:51.552421 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.643351 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:59 old-k8s-version-706521 kubelet[662]: E0717 20:10:59.552527 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.643561 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:03 old-k8s-version-706521 kubelet[662]: E0717 20:11:03.552379 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.643997 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:13 old-k8s-version-706521 kubelet[662]: E0717 20:11:13.552539 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.644210 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:17 old-k8s-version-706521 kubelet[662]: E0717 20:11:17.552652 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.644563 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:25 old-k8s-version-706521 kubelet[662]: E0717 20:11:25.552153 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.647180 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:32 old-k8s-version-706521 kubelet[662]: E0717 20:11:32.561515 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:14:08.647557 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:38 old-k8s-version-706521 kubelet[662]: E0717 20:11:38.552601 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.649442 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:43 old-k8s-version-706521 kubelet[662]: E0717 20:11:43.552468 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.649834 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:49 old-k8s-version-706521 kubelet[662]: E0717 20:11:49.552132 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.650047 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:58 old-k8s-version-706521 kubelet[662]: E0717 20:11:58.552478 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.650670 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:04 old-k8s-version-706521 kubelet[662]: E0717 20:12:04.568005 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.650890 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:12 old-k8s-version-706521 kubelet[662]: E0717 20:12:12.552438 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.651245 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:14 old-k8s-version-706521 kubelet[662]: E0717 20:12:14.098192 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.651620 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:25 old-k8s-version-706521 kubelet[662]: E0717 20:12:25.552632 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.651840 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:25 old-k8s-version-706521 kubelet[662]: E0717 20:12:25.552990 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.652052 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:36 old-k8s-version-706521 kubelet[662]: E0717 20:12:36.552671 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.652453 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:36 old-k8s-version-706521 kubelet[662]: E0717 20:12:36.553533 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.652686 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:47 old-k8s-version-706521 kubelet[662]: E0717 20:12:47.552573 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.653076 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:48 old-k8s-version-706521 kubelet[662]: E0717 20:12:48.552271 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.653339 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:01 old-k8s-version-706521 kubelet[662]: E0717 20:13:01.552554 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.653753 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:03 old-k8s-version-706521 kubelet[662]: E0717 20:13:03.552206 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.653966 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:14 old-k8s-version-706521 kubelet[662]: E0717 20:13:14.555362 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.654323 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:18 old-k8s-version-706521 kubelet[662]: E0717 20:13:18.552316 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.654532 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:25 old-k8s-version-706521 kubelet[662]: E0717 20:13:25.554625 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.654898 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:32 old-k8s-version-706521 kubelet[662]: E0717 20:13:32.553357 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.655106 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:39 old-k8s-version-706521 kubelet[662]: E0717 20:13:39.552513 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.655482 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:47 old-k8s-version-706521 kubelet[662]: E0717 20:13:47.552150 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.655700 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:50 old-k8s-version-706521 kubelet[662]: E0717 20:13:50.552908 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.656099 921861 logs.go:138] Found kubelet problem: Jul 17 20:14:01 old-k8s-version-706521 kubelet[662]: E0717 20:14:01.552159 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.656322 921861 logs.go:138] Found kubelet problem: Jul 17 20:14:02 old-k8s-version-706521 kubelet[662]: E0717 20:14:02.552503 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
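The two warnings repeating above are expected for this test rather than a regression: the Audit table later in this log shows metrics-server being enabled with --registries=MetricsServer=fake.domain, so its image pull can never resolve and the pod is pinned in ImagePullBackOff, while dashboard-metrics-scraper was pointed at registry.k8s.io/echoserver:1.4, a stand-in image that exits, hence the CrashLoopBackOff. A minimal sketch for confirming this from a workstation (the kubectl context and the k8s-app label are assumptions; the pod name is from this run):
$ kubectl -n kube-system get pods -l k8s-app=metrics-server
$ kubectl -n kube-system describe pod metrics-server-9975d5f86-xmdkg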
I0717 20:14:08.656338 921861 logs.go:123] Gathering logs for describe nodes ...
I0717 20:14:08.656364 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0717 20:14:08.845780 921861 logs.go:123] Gathering logs for kube-apiserver [f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc] ...
I0717 20:14:08.845821 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc"
I0717 20:14:08.943640 921861 logs.go:123] Gathering logs for etcd [ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151] ...
I0717 20:14:08.943689 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151"
I0717 20:14:09.004417 921861 logs.go:123] Gathering logs for kube-proxy [6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15] ...
I0717 20:14:09.004455 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15"
I0717 20:14:09.064477 921861 logs.go:123] Gathering logs for kube-controller-manager [ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3] ...
I0717 20:14:09.064509 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3"
I0717 20:14:09.180668 921861 logs.go:123] Gathering logs for containerd ...
I0717 20:14:09.180709 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0717 20:14:09.249534 921861 logs.go:123] Gathering logs for dmesg ...
I0717 20:14:09.249575 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0717 20:14:09.271241 921861 logs.go:123] Gathering logs for kube-apiserver [75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a] ...
I0717 20:14:09.271272 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a"
I0717 20:14:09.344317 921861 logs.go:123] Gathering logs for coredns [2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd] ...
I0717 20:14:09.344376 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd"
I0717 20:14:09.403848 921861 logs.go:123] Gathering logs for kube-scheduler [9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697] ...
I0717 20:14:09.403882 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697"
I0717 20:14:09.461187 921861 logs.go:123] Gathering logs for kube-controller-manager [54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2] ...
I0717 20:14:09.461221 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2"
I0717 20:14:09.556298 921861 logs.go:123] Gathering logs for kindnet [c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318] ...
I0717 20:14:09.556344 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318"
I0717 20:14:09.630480 921861 logs.go:123] Gathering logs for storage-provisioner [c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d] ...
I0717 20:14:09.630516 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d"
I0717 20:14:09.689256 921861 logs.go:123] Gathering logs for coredns [c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549] ...
I0717 20:14:09.689287 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549"
I0717 20:14:09.743850 921861 logs.go:123] Gathering logs for kube-scheduler [0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3] ...
I0717 20:14:09.743882 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3"
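The block above is minikube's log collector walking every control-plane container it can find, two IDs per component here (the instances before and after the restart), plus containerd and dmesg. The same data can be pulled by hand with the commands the collector itself runs; a sketch, assuming a shell on the node via minikube ssh:
$ out/minikube-linux-arm64 -p old-k8s-version-706521 ssh
$ sudo crictl ps -a --quiet --name=kube-apiserver
$ sudo crictl logs --tail 400 <container-id>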
I0717 20:14:09.795970 921861 out.go:304] Setting ErrFile to fd 2...
I0717 20:14:09.795994 921861 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0717 20:14:09.796039 921861 out.go:239] X Problems detected in kubelet:
W0717 20:14:09.796055 921861 out.go:239] Jul 17 20:13:39 old-k8s-version-706521 kubelet[662]: E0717 20:13:39.552513 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:09.796062 921861 out.go:239] Jul 17 20:13:47 old-k8s-version-706521 kubelet[662]: E0717 20:13:47.552150 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:09.796078 921861 out.go:239] Jul 17 20:13:50 old-k8s-version-706521 kubelet[662]: E0717 20:13:50.552908 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:09.796085 921861 out.go:239] Jul 17 20:14:01 old-k8s-version-706521 kubelet[662]: E0717 20:14:01.552159 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:09.796102 921861 out.go:239] Jul 17 20:14:02 old-k8s-version-706521 kubelet[662]: E0717 20:14:02.552503 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0717 20:14:09.796110 921861 out.go:304] Setting ErrFile to fd 2...
I0717 20:14:09.796119 921861 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:14:19.796705 921861 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0717 20:14:19.809529 921861 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0717 20:14:19.811776 921861 out.go:177]
W0717 20:14:19.813922 921861 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0717 20:14:19.813962 921861 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0717 20:14:19.813979 921861 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0717 20:14:19.813985 921861 out.go:239] *
W0717 20:14:19.815148 921861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 20:14:19.817411 921861 out.go:177]
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-706521 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
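The exit status 102 asserted here pairs with the K8S_UNHEALTHY_CONTROL_PLANE reason in the stderr above: /healthz on 192.168.76.2:8443 did answer 200, but the control plane never reported the expected v1.20.0 within the 6m0s node wait. The recovery path the output itself suggests, as a command against the same binary:
$ out/minikube-linux-arm64 delete --all --purge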
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run: docker inspect old-k8s-version-706521
helpers_test.go:235: (dbg) docker inspect old-k8s-version-706521:
-- stdout --
[
{
"Id": "24d95606673dae41cf05a387d8d4ae503709aa08ee5d6a3c108ed609f6836ff2",
"Created": "2024-07-17T20:04:45.173211681Z",
"Path": "/usr/local/bin/entrypoint",
"Args": [
"/sbin/init"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 922056,
"ExitCode": 0,
"Error": "",
"StartedAt": "2024-07-17T20:08:04.252290556Z",
"FinishedAt": "2024-07-17T20:08:03.063781396Z"
},
"Image": "sha256:476b38520acaa45848ac08864bd6ca4a7124b7e691863e24807ecda76b00d113",
"ResolvConfPath": "/var/lib/docker/containers/24d95606673dae41cf05a387d8d4ae503709aa08ee5d6a3c108ed609f6836ff2/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/24d95606673dae41cf05a387d8d4ae503709aa08ee5d6a3c108ed609f6836ff2/hostname",
"HostsPath": "/var/lib/docker/containers/24d95606673dae41cf05a387d8d4ae503709aa08ee5d6a3c108ed609f6836ff2/hosts",
"LogPath": "/var/lib/docker/containers/24d95606673dae41cf05a387d8d4ae503709aa08ee5d6a3c108ed609f6836ff2/24d95606673dae41cf05a387d8d4ae503709aa08ee5d6a3c108ed609f6836ff2-json.log",
"Name": "/old-k8s-version-706521",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "unconfined",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/lib/modules:/lib/modules:ro",
"old-k8s-version-706521:/var"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "old-k8s-version-706521",
"PortBindings": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": ""
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"ConsoleSize": [
0,
0
],
"CapAdd": null,
"CapDrop": null,
"CgroupnsMode": "host",
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "private",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": true,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": [
"seccomp=unconfined",
"apparmor=unconfined",
"label=disable"
],
"Tmpfs": {
"/run": "",
"/tmp": ""
},
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"Isolation": "",
"CpuShares": 0,
"Memory": 2306867200,
"NanoCpus": 2000000000,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": [],
"BlkioDeviceWriteBps": [],
"BlkioDeviceReadIOps": [],
"BlkioDeviceWriteIOps": [],
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"MemoryReservation": 0,
"MemorySwap": 4613734400,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": [],
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": null,
"ReadonlyPaths": null
},
"GraphDriver": {
"Data": {
"LowerDir": "/var/lib/docker/overlay2/09e9d89feca269bcf7b00cb8fdca5cda2329525436cf145e2f5f298472824834-init/diff:/var/lib/docker/overlay2/d2869a27707970b1467c2d11b9b14300d09748d022d571d0f51728cfcc5409a4/diff",
"MergedDir": "/var/lib/docker/overlay2/09e9d89feca269bcf7b00cb8fdca5cda2329525436cf145e2f5f298472824834/merged",
"UpperDir": "/var/lib/docker/overlay2/09e9d89feca269bcf7b00cb8fdca5cda2329525436cf145e2f5f298472824834/diff",
"WorkDir": "/var/lib/docker/overlay2/09e9d89feca269bcf7b00cb8fdca5cda2329525436cf145e2f5f298472824834/work"
},
"Name": "overlay2"
},
"Mounts": [
{
"Type": "bind",
"Source": "/lib/modules",
"Destination": "/lib/modules",
"Mode": "ro",
"RW": false,
"Propagation": "rprivate"
},
{
"Type": "volume",
"Name": "old-k8s-version-706521",
"Source": "/var/lib/docker/volumes/old-k8s-version-706521/_data",
"Destination": "/var",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],
"Config": {
"Hostname": "old-k8s-version-706521",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"22/tcp": {},
"2376/tcp": {},
"32443/tcp": {},
"5000/tcp": {},
"8443/tcp": {}
},
"Tty": true,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"container=docker",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
],
"Cmd": null,
"Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e",
"Volumes": null,
"WorkingDir": "/",
"Entrypoint": [
"/usr/local/bin/entrypoint",
"/sbin/init"
],
"OnBuild": null,
"Labels": {
"created_by.minikube.sigs.k8s.io": "true",
"mode.minikube.sigs.k8s.io": "old-k8s-version-706521",
"name.minikube.sigs.k8s.io": "old-k8s-version-706521",
"role.minikube.sigs.k8s.io": ""
},
"StopSignal": "SIGRTMIN+3"
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "29811773bcf23770c937611045b65013b90ae51e8273e2a9802b634fb893f9f7",
"SandboxKey": "/var/run/docker/netns/29811773bcf2",
"Ports": {
"22/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33824"
}
],
"2376/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33825"
}
],
"32443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33828"
}
],
"5000/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33826"
}
],
"8443/tcp": [
{
"HostIp": "127.0.0.1",
"HostPort": "33827"
}
]
},
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"old-k8s-version-706521": {
"IPAMConfig": {
"IPv4Address": "192.168.76.2"
},
"Links": null,
"Aliases": null,
"MacAddress": "02:42:c0:a8:4c:02",
"DriverOpts": null,
"NetworkID": "339e81622c2015693b114f70c0d458a8b7c4f2fbd19d294a38fce1fbf19ac403",
"EndpointID": "5e05af2a5aa9fdbd92035e364c620e37e22f06ea2727579b613e279bfc8c35ac",
"Gateway": "192.168.76.1",
"IPAddress": "192.168.76.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"DNSNames": [
"old-k8s-version-706521",
"24d95606673d"
]
}
}
}
}
]
-- /stdout --
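The dump above is the raw docker inspect the post-mortem helper captures. When only one field matters, the same data can be filtered with a Go template, the mechanism this log itself uses later for port lookups; a sketch against fields present in this dump:
$ docker inspect old-k8s-version-706521 --format '{{.State.Status}}'
$ docker inspect old-k8s-version-706521 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'
For this run the first should print "running" and the second "33827", matching the State and NetworkSettings.Ports blocks above.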
helpers_test.go:239: (dbg) Run: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-706521 -n old-k8s-version-706521
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run: out/minikube-linux-arm64 -p old-k8s-version-706521 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-706521 logs -n 25: (2.738335358s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs:
-- stdout --
==> Audit <==
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
| start | -p cert-expiration-906631 | cert-expiration-906631 | jenkins | v1.33.1 | 17 Jul 24 20:03 UTC | 17 Jul 24 20:04 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=3m | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | force-systemd-env-893060 | force-systemd-env-893060 | jenkins | v1.33.1 | 17 Jul 24 20:03 UTC | 17 Jul 24 20:03 UTC |
| | ssh cat | | | | | |
| | /etc/containerd/config.toml | | | | | |
| delete | -p force-systemd-env-893060 | force-systemd-env-893060 | jenkins | v1.33.1 | 17 Jul 24 20:03 UTC | 17 Jul 24 20:03 UTC |
| start | -p cert-options-602718 | cert-options-602718 | jenkins | v1.33.1 | 17 Jul 24 20:03 UTC | 17 Jul 24 20:04 UTC |
| | --memory=2048 | | | | | |
| | --apiserver-ips=127.0.0.1 | | | | | |
| | --apiserver-ips=192.168.15.15 | | | | | |
| | --apiserver-names=localhost | | | | | |
| | --apiserver-names=www.google.com | | | | | |
| | --apiserver-port=8555 | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| ssh | cert-options-602718 ssh | cert-options-602718 | jenkins | v1.33.1 | 17 Jul 24 20:04 UTC | 17 Jul 24 20:04 UTC |
| | openssl x509 -text -noout -in | | | | | |
| | /var/lib/minikube/certs/apiserver.crt | | | | | |
| ssh | -p cert-options-602718 -- sudo | cert-options-602718 | jenkins | v1.33.1 | 17 Jul 24 20:04 UTC | 17 Jul 24 20:04 UTC |
| | cat /etc/kubernetes/admin.conf | | | | | |
| delete | -p cert-options-602718 | cert-options-602718 | jenkins | v1.33.1 | 17 Jul 24 20:04 UTC | 17 Jul 24 20:04 UTC |
| start | -p old-k8s-version-706521 | old-k8s-version-706521 | jenkins | v1.33.1 | 17 Jul 24 20:04 UTC | 17 Jul 24 20:07 UTC |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| start | -p cert-expiration-906631 | cert-expiration-906631 | jenkins | v1.33.1 | 17 Jul 24 20:07 UTC | 17 Jul 24 20:07 UTC |
| | --memory=2048 | | | | | |
| | --cert-expiration=8760h | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| delete | -p cert-expiration-906631 | cert-expiration-906631 | jenkins | v1.33.1 | 17 Jul 24 20:07 UTC | 17 Jul 24 20:07 UTC |
| start | -p no-preload-835984 --memory=2200 | no-preload-835984 | jenkins | v1.33.1 | 17 Jul 24 20:07 UTC | 17 Jul 24 20:08 UTC |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.0-beta.0 | | | | | |
| addons | enable metrics-server -p old-k8s-version-706521 | old-k8s-version-706521 | jenkins | v1.33.1 | 17 Jul 24 20:07 UTC | 17 Jul 24 20:07 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p old-k8s-version-706521 | old-k8s-version-706521 | jenkins | v1.33.1 | 17 Jul 24 20:07 UTC | 17 Jul 24 20:08 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p old-k8s-version-706521 | old-k8s-version-706521 | jenkins | v1.33.1 | 17 Jul 24 20:08 UTC | 17 Jul 24 20:08 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p old-k8s-version-706521 | old-k8s-version-706521 | jenkins | v1.33.1 | 17 Jul 24 20:08 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --kvm-network=default | | | | | |
| | --kvm-qemu-uri=qemu:///system | | | | | |
| | --disable-driver-mounts | | | | | |
| | --keep-context=false | | | | | |
| | --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.20.0 | | | | | |
| addons | enable metrics-server -p no-preload-835984 | no-preload-835984 | jenkins | v1.33.1 | 17 Jul 24 20:08 UTC | 17 Jul 24 20:08 UTC |
| | --images=MetricsServer=registry.k8s.io/echoserver:1.4 | | | | | |
| | --registries=MetricsServer=fake.domain | | | | | |
| stop | -p no-preload-835984 | no-preload-835984 | jenkins | v1.33.1 | 17 Jul 24 20:08 UTC | 17 Jul 24 20:09 UTC |
| | --alsologtostderr -v=3 | | | | | |
| addons | enable dashboard -p no-preload-835984 | no-preload-835984 | jenkins | v1.33.1 | 17 Jul 24 20:09 UTC | 17 Jul 24 20:09 UTC |
| | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 | | | | | |
| start | -p no-preload-835984 --memory=2200 | no-preload-835984 | jenkins | v1.33.1 | 17 Jul 24 20:09 UTC | 17 Jul 24 20:13 UTC |
| | --alsologtostderr --wait=true | | | | | |
| | --preload=false --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.31.0-beta.0 | | | | | |
| image | no-preload-835984 image list | no-preload-835984 | jenkins | v1.33.1 | 17 Jul 24 20:13 UTC | 17 Jul 24 20:13 UTC |
| | --format=json | | | | | |
| pause | -p no-preload-835984 | no-preload-835984 | jenkins | v1.33.1 | 17 Jul 24 20:13 UTC | 17 Jul 24 20:13 UTC |
| | --alsologtostderr -v=1 | | | | | |
| unpause | -p no-preload-835984 | no-preload-835984 | jenkins | v1.33.1 | 17 Jul 24 20:13 UTC | 17 Jul 24 20:13 UTC |
| | --alsologtostderr -v=1 | | | | | |
| delete | -p no-preload-835984 | no-preload-835984 | jenkins | v1.33.1 | 17 Jul 24 20:13 UTC | 17 Jul 24 20:13 UTC |
| delete | -p no-preload-835984 | no-preload-835984 | jenkins | v1.33.1 | 17 Jul 24 20:13 UTC | 17 Jul 24 20:13 UTC |
| start | -p embed-certs-362122 | embed-certs-362122 | jenkins | v1.33.1 | 17 Jul 24 20:13 UTC | |
| | --memory=2200 | | | | | |
| | --alsologtostderr --wait=true | | | | | |
| | --embed-certs --driver=docker | | | | | |
| | --container-runtime=containerd | | | | | |
| | --kubernetes-version=v1.30.2 | | | | | |
|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/07/17 20:13:47
Running on machine: ip-172-31-21-244
Binary: Built with gc go1.22.5 for linux/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
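Reading the first entry below against this format line: in "I0717 20:13:47.789309 932216 out.go:291]", I is the severity (info), 0717 the month and day, 20:13:47.789309 the wall-clock time, 932216 the thread id (here the process id of the new minikube invocation), and out.go:291 the emitting source file and line, followed by the message.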
I0717 20:13:47.789309 932216 out.go:291] Setting OutFile to fd 1 ...
I0717 20:13:47.789489 932216 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:13:47.789500 932216 out.go:304] Setting ErrFile to fd 2...
I0717 20:13:47.789506 932216 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:13:47.789758 932216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19283-709197/.minikube/bin
I0717 20:13:47.790166 932216 out.go:298] Setting JSON to false
I0717 20:13:47.791328 932216 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":14176,"bootTime":1721233052,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1064-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
I0717 20:13:47.791401 932216 start.go:139] virtualization:
I0717 20:13:47.794253 932216 out.go:177] * [embed-certs-362122] minikube v1.33.1 on Ubuntu 20.04 (arm64)
I0717 20:13:47.796300 932216 out.go:177] - MINIKUBE_LOCATION=19283
I0717 20:13:47.796448 932216 notify.go:220] Checking for updates...
I0717 20:13:47.800302 932216 out.go:177] - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0717 20:13:47.802270 932216 out.go:177] - KUBECONFIG=/home/jenkins/minikube-integration/19283-709197/kubeconfig
I0717 20:13:47.804095 932216 out.go:177] - MINIKUBE_HOME=/home/jenkins/minikube-integration/19283-709197/.minikube
I0717 20:13:47.805906 932216 out.go:177] - MINIKUBE_BIN=out/minikube-linux-arm64
I0717 20:13:47.814992 932216 out.go:177] - MINIKUBE_FORCE_SYSTEMD=
I0717 20:13:47.817312 932216 config.go:182] Loaded profile config "old-k8s-version-706521": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
I0717 20:13:47.817414 932216 driver.go:392] Setting default libvirt URI to qemu:///system
I0717 20:13:47.856529 932216 docker.go:123] docker version: linux-27.0.3:Docker Engine - Community
I0717 20:13:47.856642 932216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0717 20:13:47.992568 932216 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-17 20:13:47.9812905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
I0717 20:13:47.992683 932216 docker.go:307] overlay module found
I0717 20:13:47.994972 932216 out.go:177] * Using the docker driver based on user configuration
I0717 20:13:47.996898 932216 start.go:297] selected driver: docker
I0717 20:13:47.996971 932216 start.go:901] validating driver "docker" against <nil>
I0717 20:13:47.996988 932216 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0717 20:13:47.997616 932216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0717 20:13:48.075163 932216 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-17 20:13:48.064961752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1064-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1]] Warnings:<nil>}}
I0717 20:13:48.075356 932216 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0717 20:13:48.075641 932216 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0717 20:13:48.078212 932216 out.go:177] * Using Docker driver with root privileges
I0717 20:13:48.080320 932216 cni.go:84] Creating CNI manager for ""
I0717 20:13:48.080344 932216 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0717 20:13:48.080355 932216 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
I0717 20:13:48.080492 932216 start.go:340] cluster config:
{Name:embed-certs-362122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-362122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 20:13:48.084239 932216 out.go:177] * Starting "embed-certs-362122" primary control-plane node in "embed-certs-362122" cluster
I0717 20:13:48.086671 932216 cache.go:121] Beginning downloading kic base image for docker with containerd
I0717 20:13:48.089942 932216 out.go:177] * Pulling base image v0.0.44-1721146479-19264 ...
I0717 20:13:48.092399 932216 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 20:13:48.092491 932216 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19283-709197/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4
I0717 20:13:48.092506 932216 cache.go:56] Caching tarball of preloaded images
I0717 20:13:48.092528 932216 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local docker daemon
I0717 20:13:48.092632 932216 preload.go:172] Found /home/jenkins/minikube-integration/19283-709197/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
I0717 20:13:48.092645 932216 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on containerd
I0717 20:13:48.092778 932216 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/config.json ...
I0717 20:13:48.092808 932216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/config.json: {Name:mk4e2f650f4b7f3d3e6ca89957db06a66715b89f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
W0717 20:13:48.112969 932216 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e is of wrong architecture
I0717 20:13:48.112990 932216 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e to local cache
I0717 20:13:48.113080 932216 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory
I0717 20:13:48.113108 932216 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e in local cache directory, skipping pull
I0717 20:13:48.113114 932216 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e exists in cache, skipping pull
I0717 20:13:48.113122 932216 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e as a tarball
I0717 20:13:48.113131 932216 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from local cache
I0717 20:13:48.252360 932216 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e from cached tarball
I0717 20:13:48.252404 932216 cache.go:194] Successfully downloaded all kic artifacts
I0717 20:13:48.252457 932216 start.go:360] acquireMachinesLock for embed-certs-362122: {Name:mk5e69d5609c7e9c832e9dbf5c8fbbb9edbf04e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0717 20:13:48.253379 932216 start.go:364] duration metric: took 894.412µs to acquireMachinesLock for "embed-certs-362122"
I0717 20:13:48.253419 932216 start.go:93] Provisioning new machine with config: &{Name:embed-certs-362122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-362122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}
I0717 20:13:48.253509 932216 start.go:125] createHost starting for "" (driver="docker")
I0717 20:13:44.521460 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:47.019706 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:48.256534 932216 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
I0717 20:13:48.256807 932216 start.go:159] libmachine.API.Create for "embed-certs-362122" (driver="docker")
I0717 20:13:48.256845 932216 client.go:168] LocalClient.Create starting
I0717 20:13:48.256937 932216 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca.pem
I0717 20:13:48.256973 932216 main.go:141] libmachine: Decoding PEM data...
I0717 20:13:48.256995 932216 main.go:141] libmachine: Parsing certificate...
I0717 20:13:48.257061 932216 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19283-709197/.minikube/certs/cert.pem
I0717 20:13:48.257084 932216 main.go:141] libmachine: Decoding PEM data...
I0717 20:13:48.257097 932216 main.go:141] libmachine: Parsing certificate...
I0717 20:13:48.257487 932216 cli_runner.go:164] Run: docker network inspect embed-certs-362122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0717 20:13:48.272886 932216 cli_runner.go:211] docker network inspect embed-certs-362122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0717 20:13:48.272970 932216 network_create.go:284] running [docker network inspect embed-certs-362122] to gather additional debugging logs...
I0717 20:13:48.273003 932216 cli_runner.go:164] Run: docker network inspect embed-certs-362122
W0717 20:13:48.288388 932216 cli_runner.go:211] docker network inspect embed-certs-362122 returned with exit code 1
I0717 20:13:48.288423 932216 network_create.go:287] error running [docker network inspect embed-certs-362122]: docker network inspect embed-certs-362122: exit status 1
stdout:
[]
stderr:
Error response from daemon: network embed-certs-362122 not found
I0717 20:13:48.288437 932216 network_create.go:289] output of [docker network inspect embed-certs-362122]: -- stdout --
[]
-- /stdout --
** stderr **
Error response from daemon: network embed-certs-362122 not found
** /stderr **
I0717 20:13:48.288550 932216 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0717 20:13:48.305142 932216 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-5691bd5cc24f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:2c:3f:ee:1f} reservation:<nil>}
I0717 20:13:48.305725 932216 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9aa9931a1395 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:bb:58:88:93} reservation:<nil>}
I0717 20:13:48.306156 932216 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fd4186492bfb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:a2:51:82:75} reservation:<nil>}
I0717 20:13:48.306517 932216 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-339e81622c20 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:db:41:01:13} reservation:<nil>}
I0717 20:13:48.307045 932216 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001869020}
I0717 20:13:48.307079 932216 network_create.go:124] attempt to create docker network embed-certs-362122 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
I0717 20:13:48.307136 932216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-362122 embed-certs-362122
I0717 20:13:48.384788 932216 network_create.go:108] docker network embed-certs-362122 192.168.85.0/24 created
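Before this create, the log shows minikube probing the existing bridge networks (192.168.49.0/24, .58, .67 and .76 were all taken) and settling on the first free private /24, 192.168.85.0/24. A quick check that the network came up with the expected subnet, using the same Go-template style as the inspect call earlier in this log (this verification command is assumed, not part of the run):
$ docker network inspect embed-certs-362122 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'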
I0717 20:13:48.384843 932216 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-362122" container
I0717 20:13:48.384971 932216 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0717 20:13:48.403663 932216 cli_runner.go:164] Run: docker volume create embed-certs-362122 --label name.minikube.sigs.k8s.io=embed-certs-362122 --label created_by.minikube.sigs.k8s.io=true
I0717 20:13:48.421409 932216 oci.go:103] Successfully created a docker volume embed-certs-362122
I0717 20:13:48.421499 932216 cli_runner.go:164] Run: docker run --rm --name embed-certs-362122-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-362122 --entrypoint /usr/bin/test -v embed-certs-362122:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -d /var/lib
I0717 20:13:49.081151 932216 oci.go:107] Successfully prepared a docker volume embed-certs-362122
I0717 20:13:49.081205 932216 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 20:13:49.081227 932216 kic.go:194] Starting extracting preloaded images to volume ...
I0717 20:13:49.081330 932216 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19283-709197/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-362122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -I lz4 -xf /preloaded.tar -C /extractDir
I0717 20:13:49.518424 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:52.018575 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:54.710381 932216 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19283-709197/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-362122:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e -I lz4 -xf /preloaded.tar -C /extractDir: (5.629011028s)
I0717 20:13:54.710414 932216 kic.go:203] duration metric: took 5.629183601s to extract preloaded images to volume ...
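The 5.6s step above is minikube's preload path: rather than pulling each image at boot, a prebuilt lz4 tarball containing every image for v1.30.2 on containerd is untarred straight into the node's /var volume. To see which preloads are already cached locally (path taken from this run):
$ ls -lh /home/jenkins/minikube-integration/19283-709197/.minikube/cache/preloaded-tarball/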
W0717 20:13:54.710558 932216 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I0717 20:13:54.710675 932216 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0717 20:13:54.764216 932216 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-362122 --name embed-certs-362122 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-362122 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-362122 --network embed-certs-362122 --ip 192.168.85.2 --volume embed-certs-362122:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e
I0717 20:13:55.122146 932216 cli_runner.go:164] Run: docker container inspect embed-certs-362122 --format={{.State.Running}}
I0717 20:13:55.146547 932216 cli_runner.go:164] Run: docker container inspect embed-certs-362122 --format={{.State.Status}}
I0717 20:13:55.176422 932216 cli_runner.go:164] Run: docker exec embed-certs-362122 stat /var/lib/dpkg/alternatives/iptables
I0717 20:13:55.246942 932216 oci.go:144] the created container "embed-certs-362122" has a running status.
I0717 20:13:55.246967 932216 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19283-709197/.minikube/machines/embed-certs-362122/id_rsa...
I0717 20:13:55.626510 932216 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19283-709197/.minikube/machines/embed-certs-362122/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0717 20:13:55.661626 932216 cli_runner.go:164] Run: docker container inspect embed-certs-362122 --format={{.State.Status}}
I0717 20:13:55.682114 932216 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0717 20:13:55.682134 932216 kic_runner.go:114] Args: [docker exec --privileged embed-certs-362122 chown docker:docker /home/docker/.ssh/authorized_keys]
I0717 20:13:55.743753 932216 cli_runner.go:164] Run: docker container inspect embed-certs-362122 --format={{.State.Status}}
I0717 20:13:55.762088 932216 machine.go:94] provisionDockerMachine start ...
I0717 20:13:55.762175 932216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362122
I0717 20:13:55.795993 932216 main.go:141] libmachine: Using SSH client type: native
I0717 20:13:55.796282 932216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33834 <nil> <nil>}
I0717 20:13:55.796292 932216 main.go:141] libmachine: About to run SSH command:
hostname
I0717 20:13:55.797055 932216 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39302->127.0.0.1:33834: read: connection reset by peer
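This handshake reset is an early-boot race: sshd inside the freshly started container is not accepting connections yet, and the provisioner typically retries until it is. For manual debugging, an equivalent connection can be sketched from the artifacts created above (key written at 20:13:55, user docker, host port 33834 from the 22/tcp mapping):
$ ssh -i /home/jenkins/minikube-integration/19283-709197/.minikube/machines/embed-certs-362122/id_rsa -p 33834 docker@127.0.0.1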
I0717 20:13:54.019770 921861 pod_ready.go:102] pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace has status "Ready":"False"
I0717 20:13:55.026998 921861 pod_ready.go:81] duration metric: took 4m0.0153977s for pod "metrics-server-9975d5f86-xmdkg" in "kube-system" namespace to be "Ready" ...
E0717 20:13:55.027023 921861 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
I0717 20:13:55.027033 921861 pod_ready.go:38] duration metric: took 5m25.908864881s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0717 20:13:55.027259 921861 api_server.go:52] waiting for apiserver process to appear ...
I0717 20:13:55.027292 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0717 20:13:55.027364 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0717 20:13:55.096172 921861 cri.go:89] found id: "75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a"
I0717 20:13:55.096196 921861 cri.go:89] found id: "f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc"
I0717 20:13:55.096202 921861 cri.go:89] found id: ""
I0717 20:13:55.096209 921861 logs.go:276] 2 containers: [75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc]
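Interleaved with the new cluster's provisioning, the old-k8s-version run (pid 921861) has exhausted its 4-minute pod-readiness wait and is now collecting diagnostics: for each component it lists candidate container IDs with crictl ps -a --quiet --name=<component>, expecting one ID per output line (two generations of kube-apiserver here). A sketch of that listing step, assuming crictl on PATH and passwordless sudo:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all CRI containers (running or exited) whose name
// matches the given component, one hex ID per output line.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, l := range strings.Split(string(out), "\n") {
		if l = strings.TrimSpace(l); l != "" {
			ids = append(ids, l)
		}
	}
	return ids, nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	fmt.Println(ids, err)
}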
I0717 20:13:55.096268 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.106201 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.113900 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0717 20:13:55.114023 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0717 20:13:55.204482 921861 cri.go:89] found id: "ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422"
I0717 20:13:55.204501 921861 cri.go:89] found id: "ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151"
I0717 20:13:55.204506 921861 cri.go:89] found id: ""
I0717 20:13:55.204514 921861 logs.go:276] 2 containers: [ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422 ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151]
I0717 20:13:55.204571 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.214697 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.219565 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0717 20:13:55.219649 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0717 20:13:55.303542 921861 cri.go:89] found id: "2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd"
I0717 20:13:55.303570 921861 cri.go:89] found id: "c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549"
I0717 20:13:55.303577 921861 cri.go:89] found id: ""
I0717 20:13:55.303592 921861 logs.go:276] 2 containers: [2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549]
I0717 20:13:55.303649 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.310040 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.314477 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0717 20:13:55.314561 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0717 20:13:55.383864 921861 cri.go:89] found id: "0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3"
I0717 20:13:55.383897 921861 cri.go:89] found id: "9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697"
I0717 20:13:55.383902 921861 cri.go:89] found id: ""
I0717 20:13:55.383910 921861 logs.go:276] 2 containers: [0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3 9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697]
I0717 20:13:55.383976 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.390821 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.402017 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0717 20:13:55.402115 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0717 20:13:55.486436 921861 cri.go:89] found id: "66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e"
I0717 20:13:55.486459 921861 cri.go:89] found id: "6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15"
I0717 20:13:55.486473 921861 cri.go:89] found id: ""
I0717 20:13:55.486481 921861 logs.go:276] 2 containers: [66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e 6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15]
I0717 20:13:55.486564 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.494243 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.504164 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0717 20:13:55.504245 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0717 20:13:55.582028 921861 cri.go:89] found id: "54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2"
I0717 20:13:55.582076 921861 cri.go:89] found id: "ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3"
I0717 20:13:55.582082 921861 cri.go:89] found id: ""
I0717 20:13:55.582093 921861 logs.go:276] 2 containers: [54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2 ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3]
I0717 20:13:55.582167 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.591513 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.596286 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0717 20:13:55.596367 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0717 20:13:55.665385 921861 cri.go:89] found id: "6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead"
I0717 20:13:55.665404 921861 cri.go:89] found id: "c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318"
I0717 20:13:55.665410 921861 cri.go:89] found id: ""
I0717 20:13:55.665417 921861 logs.go:276] 2 containers: [6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318]
I0717 20:13:55.665488 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.674833 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.682477 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0717 20:13:55.682546 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0717 20:13:55.793123 921861 cri.go:89] found id: "c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d"
I0717 20:13:55.793144 921861 cri.go:89] found id: "d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad"
I0717 20:13:55.793149 921861 cri.go:89] found id: ""
I0717 20:13:55.793157 921861 logs.go:276] 2 containers: [c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad]
I0717 20:13:55.793223 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.798047 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.802770 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0717 20:13:55.802958 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0717 20:13:55.902238 921861 cri.go:89] found id: "53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71"
I0717 20:13:55.902267 921861 cri.go:89] found id: ""
I0717 20:13:55.902286 921861 logs.go:276] 1 containers: [53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71]
I0717 20:13:55.902361 921861 ssh_runner.go:195] Run: which crictl
I0717 20:13:55.907275 921861 logs.go:123] Gathering logs for kube-proxy [6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15] ...
I0717 20:13:55.907306 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15"
I0717 20:13:55.974153 921861 logs.go:123] Gathering logs for kube-controller-manager [54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2] ...
I0717 20:13:55.974188 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2"
I0717 20:13:56.194545 921861 logs.go:123] Gathering logs for kindnet [c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318] ...
I0717 20:13:56.194586 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318"
I0717 20:13:56.324440 921861 logs.go:123] Gathering logs for storage-provisioner [d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad] ...
I0717 20:13:56.324519 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad"
I0717 20:13:56.374460 921861 logs.go:123] Gathering logs for kubernetes-dashboard [53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71] ...
I0717 20:13:56.374485 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71"
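Each discovered ID is then tailed with crictl logs --tail 400, wrapped in bash -c so sudo and the absolute crictl path resolve the same way on every node image. A one-function sketch of the gather call (the shortened ID is an illustrative prefix; crictl accepts unique prefixes):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs fetches the last n lines of a CRI container's logs,
// mirroring the `bash -c "sudo /usr/bin/crictl logs --tail ..."` calls above.
func tailContainerLogs(id string, n int) (string, error) {
	cmd := fmt.Sprintf("sudo /usr/bin/crictl logs --tail %d %s", n, id)
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := tailContainerLogs("75e3d044b136", 400)
	fmt.Println(out, err)
}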
I0717 20:13:56.429747 921861 logs.go:123] Gathering logs for kubelet ...
I0717 20:13:56.429819 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
W0717 20:13:56.486016 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:30 old-k8s-version-706521 kubelet[662]: E0717 20:08:30.661470 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:13:56.486228 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:30 old-k8s-version-706521 kubelet[662]: E0717 20:08:30.828892 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.489701 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:46 old-k8s-version-706521 kubelet[662]: E0717 20:08:46.565471 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:13:56.490071 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:47 old-k8s-version-706521 kubelet[662]: E0717 20:08:47.951245 662 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-97vqh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-97vqh" is forbidden: User "system:node:old-k8s-version-706521" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-706521' and this object
W0717 20:13:56.492040 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:59 old-k8s-version-706521 kubelet[662]: E0717 20:08:59.963531 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.492517 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:00 old-k8s-version-706521 kubelet[662]: E0717 20:09:00.968809 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.492710 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:01 old-k8s-version-706521 kubelet[662]: E0717 20:09:01.552815 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.493508 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:02 old-k8s-version-706521 kubelet[662]: E0717 20:09:02.015089 662 pod_workers.go:191] Error syncing pod acbb1d8e-4bf9-4590-b17d-5cb03849d6a4 ("storage-provisioner_kube-system(acbb1d8e-4bf9-4590-b17d-5cb03849d6a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(acbb1d8e-4bf9-4590-b17d-5cb03849d6a4)"
W0717 20:13:56.493842 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:04 old-k8s-version-706521 kubelet[662]: E0717 20:09:04.098038 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.496729 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:15 old-k8s-version-706521 kubelet[662]: E0717 20:09:15.561998 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:13:56.497465 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:19 old-k8s-version-706521 kubelet[662]: E0717 20:09:19.118357 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.497801 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:24 old-k8s-version-706521 kubelet[662]: E0717 20:09:24.098214 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.497989 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:26 old-k8s-version-706521 kubelet[662]: E0717 20:09:26.558472 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.498326 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:36 old-k8s-version-706521 kubelet[662]: E0717 20:09:36.552738 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.498514 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:37 old-k8s-version-706521 kubelet[662]: E0717 20:09:37.562445 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.498854 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:49 old-k8s-version-706521 kubelet[662]: E0717 20:09:49.559705 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.499351 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:50 old-k8s-version-706521 kubelet[662]: E0717 20:09:50.204203 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.499701 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:54 old-k8s-version-706521 kubelet[662]: E0717 20:09:54.098178 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.502351 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:01 old-k8s-version-706521 kubelet[662]: E0717 20:10:01.562475 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:13:56.502703 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:06 old-k8s-version-706521 kubelet[662]: E0717 20:10:06.556299 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.502910 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:15 old-k8s-version-706521 kubelet[662]: E0717 20:10:15.552476 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.503246 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:18 old-k8s-version-706521 kubelet[662]: E0717 20:10:18.552987 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.503663 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:26 old-k8s-version-706521 kubelet[662]: E0717 20:10:26.552937 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.504022 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:29 old-k8s-version-706521 kubelet[662]: E0717 20:10:29.552164 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.504216 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:38 old-k8s-version-706521 kubelet[662]: E0717 20:10:38.554109 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.504923 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:42 old-k8s-version-706521 kubelet[662]: E0717 20:10:42.348760 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.505274 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:44 old-k8s-version-706521 kubelet[662]: E0717 20:10:44.097722 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.505483 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:51 old-k8s-version-706521 kubelet[662]: E0717 20:10:51.552421 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.505837 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:59 old-k8s-version-706521 kubelet[662]: E0717 20:10:59.552527 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.506029 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:03 old-k8s-version-706521 kubelet[662]: E0717 20:11:03.552379 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.506365 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:13 old-k8s-version-706521 kubelet[662]: E0717 20:11:13.552539 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.506564 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:17 old-k8s-version-706521 kubelet[662]: E0717 20:11:17.552652 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.506935 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:25 old-k8s-version-706521 kubelet[662]: E0717 20:11:25.552153 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.509562 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:32 old-k8s-version-706521 kubelet[662]: E0717 20:11:32.561515 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:13:56.509932 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:38 old-k8s-version-706521 kubelet[662]: E0717 20:11:38.552601 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.510125 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:43 old-k8s-version-706521 kubelet[662]: E0717 20:11:43.552468 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.510466 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:49 old-k8s-version-706521 kubelet[662]: E0717 20:11:49.552132 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.510654 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:58 old-k8s-version-706521 kubelet[662]: E0717 20:11:58.552478 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.511368 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:04 old-k8s-version-706521 kubelet[662]: E0717 20:12:04.568005 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.511566 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:12 old-k8s-version-706521 kubelet[662]: E0717 20:12:12.552438 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.511917 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:14 old-k8s-version-706521 kubelet[662]: E0717 20:12:14.098192 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.512263 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:25 old-k8s-version-706521 kubelet[662]: E0717 20:12:25.552632 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.512455 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:25 old-k8s-version-706521 kubelet[662]: E0717 20:12:25.552990 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.512646 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:36 old-k8s-version-706521 kubelet[662]: E0717 20:12:36.552671 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.513015 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:36 old-k8s-version-706521 kubelet[662]: E0717 20:12:36.553533 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.513239 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:47 old-k8s-version-706521 kubelet[662]: E0717 20:12:47.552573 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.513583 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:48 old-k8s-version-706521 kubelet[662]: E0717 20:12:48.552271 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.513775 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:01 old-k8s-version-706521 kubelet[662]: E0717 20:13:01.552554 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.514171 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:03 old-k8s-version-706521 kubelet[662]: E0717 20:13:03.552206 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.514363 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:14 old-k8s-version-706521 kubelet[662]: E0717 20:13:14.555362 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.514705 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:18 old-k8s-version-706521 kubelet[662]: E0717 20:13:18.552316 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.514915 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:25 old-k8s-version-706521 kubelet[662]: E0717 20:13:25.554625 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.515257 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:32 old-k8s-version-706521 kubelet[662]: E0717 20:13:32.553357 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.515445 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:39 old-k8s-version-706521 kubelet[662]: E0717 20:13:39.552513 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:56.515818 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:47 old-k8s-version-706521 kubelet[662]: E0717 20:13:47.552150 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:56.516033 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:50 old-k8s-version-706521 kubelet[662]: E0717 20:13:50.552908 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
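Every "Found kubelet problem" above comes from scanning the last 400 lines of the kubelet journal for known failure markers, and all of the hits are the same two induced failures: metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 (the test deliberately configures an unresolvable registry, hence ErrImagePull and then ImagePullBackOff), and dashboard-metrics-scraper sits in CrashLoopBackOff with the usual doubling back-off (10s, 20s, 40s, 1m20s, 2m40s). A sketch of such a scan; the marker list is illustrative, not minikube's actual pattern set:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeletProblems greps the recent kubelet journal for lines that match
// a small set of failure markers, similar in spirit to logs.go:138 above.
func kubeletProblems(tail int) ([]string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		fmt.Sprintf("sudo journalctl -u kubelet -n %d", tail)).Output()
	if err != nil {
		return nil, err
	}
	markers := []string{"ErrImagePull", "ImagePullBackOff", "CrashLoopBackOff"}
	var problems []string
	for _, line := range strings.Split(string(out), "\n") {
		for _, m := range markers {
			if strings.Contains(line, m) {
				problems = append(problems, line)
				break
			}
		}
	}
	return problems, nil
}

func main() {
	p, err := kubeletProblems(400)
	fmt.Println(len(p), err)
}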
I0717 20:13:56.516083 921861 logs.go:123] Gathering logs for kube-apiserver [f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc] ...
I0717 20:13:56.516105 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc"
I0717 20:13:56.603364 921861 logs.go:123] Gathering logs for coredns [c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549] ...
I0717 20:13:56.603396 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549"
I0717 20:13:56.650145 921861 logs.go:123] Gathering logs for kindnet [6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead] ...
I0717 20:13:56.650174 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead"
I0717 20:13:56.737339 921861 logs.go:123] Gathering logs for storage-provisioner [c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d] ...
I0717 20:13:56.737380 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d"
I0717 20:13:56.783307 921861 logs.go:123] Gathering logs for containerd ...
I0717 20:13:56.783337 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0717 20:13:56.852823 921861 logs.go:123] Gathering logs for coredns [2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd] ...
I0717 20:13:56.852908 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd"
I0717 20:13:56.903677 921861 logs.go:123] Gathering logs for kube-scheduler [0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3] ...
I0717 20:13:56.903750 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3"
I0717 20:13:56.960337 921861 logs.go:123] Gathering logs for kube-controller-manager [ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3] ...
I0717 20:13:56.960406 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3"
I0717 20:13:57.050649 921861 logs.go:123] Gathering logs for dmesg ...
I0717 20:13:57.050686 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0717 20:13:57.072456 921861 logs.go:123] Gathering logs for kube-apiserver [75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a] ...
I0717 20:13:57.072498 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a"
I0717 20:13:57.139790 921861 logs.go:123] Gathering logs for kube-scheduler [9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697] ...
I0717 20:13:57.139825 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697"
I0717 20:13:57.186897 921861 logs.go:123] Gathering logs for kube-proxy [66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e] ...
I0717 20:13:57.186927 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e"
I0717 20:13:57.226201 921861 logs.go:123] Gathering logs for container status ...
I0717 20:13:57.226231 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0717 20:13:57.269422 921861 logs.go:123] Gathering logs for describe nodes ...
I0717 20:13:57.269452 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0717 20:13:57.411433 921861 logs.go:123] Gathering logs for etcd [ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422] ...
I0717 20:13:57.411466 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422"
I0717 20:13:57.452643 921861 logs.go:123] Gathering logs for etcd [ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151] ...
I0717 20:13:57.452674 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151"
I0717 20:13:57.491246 921861 out.go:304] Setting ErrFile to fd 2...
I0717 20:13:57.491275 921861 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0717 20:13:57.491328 921861 out.go:239] X Problems detected in kubelet:
W0717 20:13:57.491342 921861 out.go:239] Jul 17 20:13:25 old-k8s-version-706521 kubelet[662]: E0717 20:13:25.554625 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:57.491352 921861 out.go:239] Jul 17 20:13:32 old-k8s-version-706521 kubelet[662]: E0717 20:13:32.553357 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:57.491364 921861 out.go:239] Jul 17 20:13:39 old-k8s-version-706521 kubelet[662]: E0717 20:13:39.552513 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:13:57.491372 921861 out.go:239] Jul 17 20:13:47 old-k8s-version-706521 kubelet[662]: E0717 20:13:47.552150 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:13:57.491382 921861 out.go:239] Jul 17 20:13:50 old-k8s-version-706521 kubelet[662]: E0717 20:13:50.552908 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0717 20:13:57.491388 921861 out.go:304] Setting ErrFile to fd 2...
I0717 20:13:57.491401 921861 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:13:58.934459 932216 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-362122
I0717 20:13:58.934485 932216 ubuntu.go:169] provisioning hostname "embed-certs-362122"
I0717 20:13:58.934555 932216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362122
I0717 20:13:58.951512 932216 main.go:141] libmachine: Using SSH client type: native
I0717 20:13:58.951832 932216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33834 <nil> <nil>}
I0717 20:13:58.951851 932216 main.go:141] libmachine: About to run SSH command:
sudo hostname embed-certs-362122 && echo "embed-certs-362122" | sudo tee /etc/hostname
I0717 20:13:59.106420 932216 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-362122
I0717 20:13:59.106550 932216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362122
I0717 20:13:59.123867 932216 main.go:141] libmachine: Using SSH client type: native
I0717 20:13:59.124113 932216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2c70] 0x3e54d0 <nil> [] 0s} 127.0.0.1 33834 <nil> <nil>}
I0717 20:13:59.124136 932216 main.go:141] libmachine: About to run SSH command:
if ! grep -xq '.*\sembed-certs-362122' /etc/hosts; then
    if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-362122/g' /etc/hosts;
    else
        echo '127.0.1.1 embed-certs-362122' | sudo tee -a /etc/hosts;
    fi
fi
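Provisioning drives the node entirely over SSH: first sudo hostname plus a persisted /etc/hostname, then the script above, which rewrites an existing 127.0.1.1 entry in /etc/hosts or appends one. A sketch of running one such command with golang.org/x/crypto/ssh (an external module), using the key path and port from this log; host-key checking is skipped because the target is a local, just-created container:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote executes one shell command on the node over SSH, the way the
// provisioner runs the hostname/hosts scripts above.
func runRemote(addr, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local throwaway container
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runRemote("127.0.0.1:33834",
		"/home/jenkins/minikube-integration/19283-709197/.minikube/machines/embed-certs-362122/id_rsa",
		`sudo hostname embed-certs-362122 && echo "embed-certs-362122" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}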
I0717 20:13:59.267059 932216 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0717 20:13:59.267149 932216 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19283-709197/.minikube CaCertPath:/home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19283-709197/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19283-709197/.minikube}
I0717 20:13:59.267204 932216 ubuntu.go:177] setting up certificates
I0717 20:13:59.267235 932216 provision.go:84] configureAuth start
I0717 20:13:59.267316 932216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-362122
I0717 20:13:59.284585 932216 provision.go:143] copyHostCerts
I0717 20:13:59.284653 932216 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-709197/.minikube/ca.pem, removing ...
I0717 20:13:59.284668 932216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-709197/.minikube/ca.pem
I0717 20:13:59.284749 932216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19283-709197/.minikube/ca.pem (1078 bytes)
I0717 20:13:59.284848 932216 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-709197/.minikube/cert.pem, removing ...
I0717 20:13:59.284858 932216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-709197/.minikube/cert.pem
I0717 20:13:59.284887 932216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19283-709197/.minikube/cert.pem (1123 bytes)
I0717 20:13:59.284946 932216 exec_runner.go:144] found /home/jenkins/minikube-integration/19283-709197/.minikube/key.pem, removing ...
I0717 20:13:59.284955 932216 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19283-709197/.minikube/key.pem
I0717 20:13:59.284984 932216 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19283-709197/.minikube/key.pem (1675 bytes)
I0717 20:13:59.285039 932216 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19283-709197/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca-key.pem org=jenkins.embed-certs-362122 san=[127.0.0.1 192.168.85.2 embed-certs-362122 localhost minikube]
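configureAuth refreshes the host-side copies of the CA, client cert, and key, then mints a server certificate whose SANs cover every name the machine can be reached by: 127.0.0.1, the container IP, the node name, localhost, and minikube. A compact sketch of SAN-bearing certificate generation with crypto/x509; it self-signs for brevity, whereas minikube signs server.pem with its CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// serverCertPEM creates a certificate valid for the given DNS names and
// IP addresses (the SAN list seen in the log line above).
func serverCertPEM(dnsNames []string, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-362122"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		DNSNames:     dnsNames, // SANs: hostnames
		IPAddresses:  ips,      // SANs: addresses
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := serverCertPEM(
		[]string{"embed-certs-362122", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")})
	fmt.Println(len(pemBytes), err)
}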
I0717 20:14:01.030088 932216 provision.go:177] copyRemoteCerts
I0717 20:14:01.030158 932216 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0717 20:14:01.030204 932216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362122
I0717 20:14:01.058578 932216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/embed-certs-362122/id_rsa Username:docker}
I0717 20:14:01.162297 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0717 20:14:01.201235 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0717 20:14:01.232887 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
I0717 20:14:01.269338 932216 provision.go:87] duration metric: took 2.002073091s to configureAuth
I0717 20:14:01.269372 932216 ubuntu.go:193] setting minikube options for container-runtime
I0717 20:14:01.269579 932216 config.go:182] Loaded profile config "embed-certs-362122": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0717 20:14:01.269595 932216 machine.go:97] duration metric: took 5.507491981s to provisionDockerMachine
I0717 20:14:01.269602 932216 client.go:171] duration metric: took 13.012747135s to LocalClient.Create
I0717 20:14:01.269620 932216 start.go:167] duration metric: took 13.012815189s to libmachine.API.Create "embed-certs-362122"
I0717 20:14:01.269632 932216 start.go:293] postStartSetup for "embed-certs-362122" (driver="docker")
I0717 20:14:01.269642 932216 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0717 20:14:01.269701 932216 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0717 20:14:01.269746 932216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362122
I0717 20:14:01.287648 932216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/embed-certs-362122/id_rsa Username:docker}
I0717 20:14:01.389251 932216 ssh_runner.go:195] Run: cat /etc/os-release
I0717 20:14:01.392536 932216 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0717 20:14:01.392574 932216 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0717 20:14:01.392602 932216 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0717 20:14:01.392615 932216 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0717 20:14:01.392626 932216 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-709197/.minikube/addons for local assets ...
I0717 20:14:01.392713 932216 filesync.go:126] Scanning /home/jenkins/minikube-integration/19283-709197/.minikube/files for local assets ...
I0717 20:14:01.392798 932216 filesync.go:149] local asset: /home/jenkins/minikube-integration/19283-709197/.minikube/files/etc/ssl/certs/7145882.pem -> 7145882.pem in /etc/ssl/certs
I0717 20:14:01.392911 932216 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0717 20:14:01.403831 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/files/etc/ssl/certs/7145882.pem --> /etc/ssl/certs/7145882.pem (1708 bytes)
I0717 20:14:01.430972 932216 start.go:296] duration metric: took 161.325977ms for postStartSetup
I0717 20:14:01.431353 932216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-362122
I0717 20:14:01.448525 932216 profile.go:143] Saving config to /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/config.json ...
I0717 20:14:01.448819 932216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0717 20:14:01.448861 932216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362122
I0717 20:14:01.466888 932216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/embed-certs-362122/id_rsa Username:docker}
I0717 20:14:01.568614 932216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0717 20:14:01.576401 932216 start.go:128] duration metric: took 13.322875661s to createHost
I0717 20:14:01.576428 932216 start.go:83] releasing machines lock for "embed-certs-362122", held for 13.323030962s
I0717 20:14:01.576527 932216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-362122
I0717 20:14:01.596673 932216 ssh_runner.go:195] Run: cat /version.json
I0717 20:14:01.596880 932216 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0717 20:14:01.596958 932216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362122
I0717 20:14:01.597582 932216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-362122
I0717 20:14:01.620512 932216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/embed-certs-362122/id_rsa Username:docker}
I0717 20:14:01.625802 932216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19283-709197/.minikube/machines/embed-certs-362122/id_rsa Username:docker}
I0717 20:14:01.718897 932216 ssh_runner.go:195] Run: systemctl --version
I0717 20:14:01.856863 932216 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0717 20:14:01.861453 932216 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0717 20:14:01.891886 932216 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0717 20:14:01.892020 932216 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0717 20:14:01.938010 932216 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0717 20:14:01.938051 932216 start.go:495] detecting cgroup driver to use...
I0717 20:14:01.938087 932216 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0717 20:14:01.938158 932216 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0717 20:14:01.953464 932216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0717 20:14:01.968337 932216 docker.go:217] disabling cri-docker service (if available) ...
I0717 20:14:01.968445 932216 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
I0717 20:14:01.985050 932216 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
I0717 20:14:02.004577 932216 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
I0717 20:14:02.100376 932216 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
I0717 20:14:02.214837 932216 docker.go:233] disabling docker service ...
I0717 20:14:02.214948 932216 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
I0717 20:14:02.240090 932216 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
I0717 20:14:02.252031 932216 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
I0717 20:14:02.340237 932216 ssh_runner.go:195] Run: sudo systemctl mask docker.service
I0717 20:14:02.432942 932216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
I0717 20:14:02.445141 932216 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0717 20:14:02.462884 932216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0717 20:14:02.473376 932216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0717 20:14:02.483857 932216 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0717 20:14:02.483981 932216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0717 20:14:02.494258 932216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 20:14:02.504871 932216 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0717 20:14:02.515101 932216 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0717 20:14:02.525337 932216 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0717 20:14:02.535753 932216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0717 20:14:02.545975 932216 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0717 20:14:02.559984 932216 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0717 20:14:02.570572 932216 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0717 20:14:02.580099 932216 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0717 20:14:02.588891 932216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 20:14:02.672935 932216 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0717 20:14:02.819736 932216 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
I0717 20:14:02.819811 932216 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
I0717 20:14:02.823668 932216 start.go:563] Will wait 60s for crictl version
I0717 20:14:02.823739 932216 ssh_runner.go:195] Run: which crictl
I0717 20:14:02.827532 932216 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0717 20:14:02.873129 932216 start.go:579] Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: 1.7.18
RuntimeApiVersion: v1
I0717 20:14:02.873211 932216 ssh_runner.go:195] Run: containerd --version
I0717 20:14:02.897245 932216 ssh_runner.go:195] Run: containerd --version
I0717 20:14:02.928992 932216 out.go:177] * Preparing Kubernetes v1.30.2 on containerd 1.7.18 ...
I0717 20:14:02.931517 932216 cli_runner.go:164] Run: docker network inspect embed-certs-362122 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0717 20:14:02.947531 932216 ssh_runner.go:195] Run: grep 192.168.85.1 host.minikube.internal$ /etc/hosts
I0717 20:14:02.951321 932216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 20:14:02.962851 932216 kubeadm.go:883] updating cluster {Name:embed-certs-362122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-362122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0717 20:14:02.962979 932216 preload.go:131] Checking if preload exists for k8s version v1.30.2 and runtime containerd
I0717 20:14:02.963041 932216 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 20:14:03.004339 932216 containerd.go:627] all images are preloaded for containerd runtime.
I0717 20:14:03.004366 932216 containerd.go:534] Images already preloaded, skipping extraction
I0717 20:14:03.004434 932216 ssh_runner.go:195] Run: sudo crictl images --output json
I0717 20:14:03.044135 932216 containerd.go:627] all images are preloaded for containerd runtime.
I0717 20:14:03.044159 932216 cache_images.go:84] Images are preloaded, skipping loading
I0717 20:14:03.044167 932216 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.30.2 containerd true true} ...
I0717 20:14:03.044279 932216 kubeadm.go:946] kubelet [Unit]
Wants=containerd.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-362122 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
[Install]
config:
{KubernetesVersion:v1.30.2 ClusterName:embed-certs-362122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0717 20:14:03.044353 932216 ssh_runner.go:195] Run: sudo crictl info
I0717 20:14:03.086237 932216 cni.go:84] Creating CNI manager for ""
I0717 20:14:03.086258 932216 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
I0717 20:14:03.086267 932216 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0717 20:14:03.086292 932216 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-362122 NodeName:embed-certs-362122 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0717 20:14:03.086419 932216 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 192.168.85.2
bindPort: 8443
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
ttl: 24h0m0s
usages:
- signing
- authentication
nodeRegistration:
criSocket: unix:///run/containerd/containerd.sock
name: "embed-certs-362122"
kubeletExtraArgs:
node-ip: 192.168.85.2
taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
extraArgs:
enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
extraArgs:
allocate-node-cidrs: "true"
leader-elect: "false"
scheduler:
extraArgs:
leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
local:
dataDir: /var/lib/minikube/etcd
extraArgs:
proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.2
networking:
dnsDomain: cluster.local
podSubnet: "10.244.0.0/16"
serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
x509:
clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
nodefs.available: "0%"
nodefs.inodesFree: "0%"
imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
tcpCloseWaitTimeout: 0s
I0717 20:14:03.086491 932216 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
I0717 20:14:03.097107 932216 binaries.go:44] Found k8s binaries, skipping transfer
I0717 20:14:03.097183 932216 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0717 20:14:03.106656 932216 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
I0717 20:14:03.125703 932216 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0717 20:14:03.146388 932216 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
I0717 20:14:03.164702 932216 ssh_runner.go:195] Run: grep 192.168.85.2 control-plane.minikube.internal$ /etc/hosts
I0717 20:14:03.168326 932216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0717 20:14:03.180301 932216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0717 20:14:03.270295 932216 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0717 20:14:03.286898 932216 certs.go:68] Setting up /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122 for IP: 192.168.85.2
I0717 20:14:03.286925 932216 certs.go:194] generating shared ca certs ...
I0717 20:14:03.286943 932216 certs.go:226] acquiring lock for ca certs: {Name:mkfe19deb7be0c5238e120e88073153330750974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:14:03.287078 932216 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19283-709197/.minikube/ca.key
I0717 20:14:03.287131 932216 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19283-709197/.minikube/proxy-client-ca.key
I0717 20:14:03.287143 932216 certs.go:256] generating profile certs ...
I0717 20:14:03.287198 932216 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/client.key
I0717 20:14:03.287217 932216 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/client.crt with IP's: []
I0717 20:14:03.874297 932216 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/client.crt ...
I0717 20:14:03.874328 932216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/client.crt: {Name:mk9b93f17d39d61c8ce6960f1e589ace0b9425d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:14:03.874911 932216 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/client.key ...
I0717 20:14:03.874931 932216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/client.key: {Name:mk61d201c55cf5c4903b97e6d817da7c7244a2c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:14:03.875446 932216 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.key.6788fc1b
I0717 20:14:03.875471 932216 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.crt.6788fc1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
I0717 20:14:04.741145 932216 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.crt.6788fc1b ...
I0717 20:14:04.741176 932216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.crt.6788fc1b: {Name:mk0977e3c43ae467e3696bcbbeaa46415bb8742b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:14:04.741386 932216 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.key.6788fc1b ...
I0717 20:14:04.741407 932216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.key.6788fc1b: {Name:mk28fc0a71c7fed7b4294edef82866e304ac4b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:14:04.741484 932216 certs.go:381] copying /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.crt.6788fc1b -> /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.crt
I0717 20:14:04.741559 932216 certs.go:385] copying /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.key.6788fc1b -> /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.key
I0717 20:14:04.741633 932216 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/proxy-client.key
I0717 20:14:04.741654 932216 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/proxy-client.crt with IP's: []
I0717 20:14:05.368690 932216 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/proxy-client.crt ...
I0717 20:14:05.368723 932216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/proxy-client.crt: {Name:mkb6d2373191bd1d26cd2e695940660bc55c8624 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:14:05.368913 932216 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/proxy-client.key ...
I0717 20:14:05.368930 932216 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/proxy-client.key: {Name:mk36199234389ac57d07fda7cf89cf85c094d952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0717 20:14:05.369132 932216 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/714588.pem (1338 bytes)
W0717 20:14:05.369175 932216 certs.go:480] ignoring /home/jenkins/minikube-integration/19283-709197/.minikube/certs/714588_empty.pem, impossibly tiny 0 bytes
I0717 20:14:05.369190 932216 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca-key.pem (1679 bytes)
I0717 20:14:05.369216 932216 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/ca.pem (1078 bytes)
I0717 20:14:05.369244 932216 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/cert.pem (1123 bytes)
I0717 20:14:05.369272 932216 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/certs/key.pem (1675 bytes)
I0717 20:14:05.369317 932216 certs.go:484] found cert: /home/jenkins/minikube-integration/19283-709197/.minikube/files/etc/ssl/certs/7145882.pem (1708 bytes)
I0717 20:14:05.369907 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0717 20:14:05.396292 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0717 20:14:05.422755 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0717 20:14:05.447793 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0717 20:14:05.473630 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
I0717 20:14:05.499113 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0717 20:14:05.524567 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0717 20:14:05.551734 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/profiles/embed-certs-362122/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0717 20:14:05.578863 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/certs/714588.pem --> /usr/share/ca-certificates/714588.pem (1338 bytes)
I0717 20:14:05.604239 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/files/etc/ssl/certs/7145882.pem --> /usr/share/ca-certificates/7145882.pem (1708 bytes)
I0717 20:14:05.629288 932216 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19283-709197/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0717 20:14:05.654314 932216 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0717 20:14:05.673363 932216 ssh_runner.go:195] Run: openssl version
I0717 20:14:05.680250 932216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/714588.pem && ln -fs /usr/share/ca-certificates/714588.pem /etc/ssl/certs/714588.pem"
I0717 20:14:05.691105 932216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/714588.pem
I0717 20:14:05.696120 932216 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 17 19:26 /usr/share/ca-certificates/714588.pem
I0717 20:14:05.696200 932216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/714588.pem
I0717 20:14:05.704669 932216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/714588.pem /etc/ssl/certs/51391683.0"
I0717 20:14:05.716459 932216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7145882.pem && ln -fs /usr/share/ca-certificates/7145882.pem /etc/ssl/certs/7145882.pem"
I0717 20:14:05.728082 932216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7145882.pem
I0717 20:14:05.732427 932216 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 17 19:26 /usr/share/ca-certificates/7145882.pem
I0717 20:14:05.732510 932216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7145882.pem
I0717 20:14:05.740314 932216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7145882.pem /etc/ssl/certs/3ec20f2e.0"
I0717 20:14:05.750965 932216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0717 20:14:05.761130 932216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0717 20:14:05.765555 932216 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:17 /usr/share/ca-certificates/minikubeCA.pem
I0717 20:14:05.765628 932216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0717 20:14:05.773653 932216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0717 20:14:05.785009 932216 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0717 20:14:05.788632 932216 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0717 20:14:05.788728 932216 kubeadm.go:392] StartCluster: {Name:embed-certs-362122 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721146479-19264@sha256:7ee06b7e8fb4a6c7fce11a567253ea7d43fed61ee0beca281a1ac2c2566a2a2e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:embed-certs-362122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0717 20:14:05.788824 932216 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
I0717 20:14:05.788882 932216 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I0717 20:14:05.827236 932216 cri.go:89] found id: ""
I0717 20:14:05.827349 932216 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0717 20:14:05.836527 932216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0717 20:14:05.845676 932216 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0717 20:14:05.845757 932216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0717 20:14:05.855429 932216 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0717 20:14:05.855504 932216 kubeadm.go:157] found existing configuration files:
I0717 20:14:05.855571 932216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0717 20:14:05.864661 932216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0717 20:14:05.864745 932216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0717 20:14:05.873843 932216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0717 20:14:05.882891 932216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0717 20:14:05.883009 932216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0717 20:14:05.892046 932216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0717 20:14:05.901439 932216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0717 20:14:05.901503 932216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0717 20:14:05.910958 932216 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0717 20:14:05.921141 932216 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0717 20:14:05.921204 932216 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0717 20:14:05.929879 932216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0717 20:14:05.976158 932216 kubeadm.go:310] [init] Using Kubernetes version: v1.30.2
I0717 20:14:05.976556 932216 kubeadm.go:310] [preflight] Running pre-flight checks
I0717 20:14:06.028853 932216 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
I0717 20:14:06.028939 932216 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1064-aws
I0717 20:14:06.029000 932216 kubeadm.go:310] OS: Linux
I0717 20:14:06.029061 932216 kubeadm.go:310] CGROUPS_CPU: enabled
I0717 20:14:06.029122 932216 kubeadm.go:310] CGROUPS_CPUACCT: enabled
I0717 20:14:06.029178 932216 kubeadm.go:310] CGROUPS_CPUSET: enabled
I0717 20:14:06.029237 932216 kubeadm.go:310] CGROUPS_DEVICES: enabled
I0717 20:14:06.029292 932216 kubeadm.go:310] CGROUPS_FREEZER: enabled
I0717 20:14:06.029347 932216 kubeadm.go:310] CGROUPS_MEMORY: enabled
I0717 20:14:06.029396 932216 kubeadm.go:310] CGROUPS_PIDS: enabled
I0717 20:14:06.029449 932216 kubeadm.go:310] CGROUPS_HUGETLB: enabled
I0717 20:14:06.029504 932216 kubeadm.go:310] CGROUPS_BLKIO: enabled
I0717 20:14:06.102962 932216 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0717 20:14:06.103114 932216 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0717 20:14:06.103244 932216 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0717 20:14:06.344793 932216 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0717 20:14:06.349558 932216 out.go:204] - Generating certificates and keys ...
I0717 20:14:06.349739 932216 kubeadm.go:310] [certs] Using existing ca certificate authority
I0717 20:14:06.349853 932216 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0717 20:14:06.644847 932216 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0717 20:14:07.033087 932216 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0717 20:14:07.344318 932216 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0717 20:14:07.493042 921861 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0717 20:14:07.512178 921861 api_server.go:72] duration metric: took 5m54.662260002s to wait for apiserver process to appear ...
I0717 20:14:07.512202 921861 api_server.go:88] waiting for apiserver healthz status ...
I0717 20:14:07.512237 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
I0717 20:14:07.512289 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I0717 20:14:07.562714 921861 cri.go:89] found id: "75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a"
I0717 20:14:07.562742 921861 cri.go:89] found id: "f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc"
I0717 20:14:07.562748 921861 cri.go:89] found id: ""
I0717 20:14:07.562755 921861 logs.go:276] 2 containers: [75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc]
I0717 20:14:07.562824 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.566741 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.570378 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
I0717 20:14:07.570441 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
I0717 20:14:07.617037 921861 cri.go:89] found id: "ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422"
I0717 20:14:07.617061 921861 cri.go:89] found id: "ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151"
I0717 20:14:07.617069 921861 cri.go:89] found id: ""
I0717 20:14:07.617077 921861 logs.go:276] 2 containers: [ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422 ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151]
I0717 20:14:07.617132 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.621162 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.625269 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
I0717 20:14:07.625335 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
I0717 20:14:07.683294 921861 cri.go:89] found id: "2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd"
I0717 20:14:07.683313 921861 cri.go:89] found id: "c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549"
I0717 20:14:07.683318 921861 cri.go:89] found id: ""
I0717 20:14:07.683325 921861 logs.go:276] 2 containers: [2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549]
I0717 20:14:07.683383 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.687487 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.691418 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
I0717 20:14:07.691484 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I0717 20:14:07.745936 921861 cri.go:89] found id: "0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3"
I0717 20:14:07.745956 921861 cri.go:89] found id: "9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697"
I0717 20:14:07.745961 921861 cri.go:89] found id: ""
I0717 20:14:07.745968 921861 logs.go:276] 2 containers: [0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3 9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697]
I0717 20:14:07.746028 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.750020 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.753707 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
I0717 20:14:07.753770 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
I0717 20:14:07.806380 921861 cri.go:89] found id: "66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e"
I0717 20:14:07.806403 921861 cri.go:89] found id: "6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15"
I0717 20:14:07.806414 921861 cri.go:89] found id: ""
I0717 20:14:07.806421 921861 logs.go:276] 2 containers: [66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e 6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15]
I0717 20:14:07.806474 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.810664 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.814569 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
I0717 20:14:07.814693 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I0717 20:14:07.880269 921861 cri.go:89] found id: "54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2"
I0717 20:14:07.880351 921861 cri.go:89] found id: "ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3"
I0717 20:14:07.880374 921861 cri.go:89] found id: ""
I0717 20:14:07.880411 921861 logs.go:276] 2 containers: [54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2 ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3]
I0717 20:14:07.880497 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.884980 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.888721 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
I0717 20:14:07.888842 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
I0717 20:14:07.940015 921861 cri.go:89] found id: "6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead"
I0717 20:14:07.940090 921861 cri.go:89] found id: "c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318"
I0717 20:14:07.940110 921861 cri.go:89] found id: ""
I0717 20:14:07.940132 921861 logs.go:276] 2 containers: [6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318]
I0717 20:14:07.940220 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.951143 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:07.954868 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
I0717 20:14:07.955006 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I0717 20:14:08.017613 921861 cri.go:89] found id: "53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71"
I0717 20:14:08.017686 921861 cri.go:89] found id: ""
I0717 20:14:08.017714 921861 logs.go:276] 1 containers: [53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71]
I0717 20:14:08.017805 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:08.022399 921861 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
I0717 20:14:08.022533 921861 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I0717 20:14:08.099451 921861 cri.go:89] found id: "c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d"
I0717 20:14:08.099529 921861 cri.go:89] found id: "d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad"
I0717 20:14:08.099549 921861 cri.go:89] found id: ""
I0717 20:14:08.099572 921861 logs.go:276] 2 containers: [c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad]
I0717 20:14:08.099683 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:08.104380 921861 ssh_runner.go:195] Run: which crictl
I0717 20:14:08.110423 921861 logs.go:123] Gathering logs for etcd [ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422] ...
I0717 20:14:08.110447 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422"
I0717 20:14:08.177584 921861 logs.go:123] Gathering logs for kube-proxy [66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e] ...
I0717 20:14:08.177658 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e"
I0717 20:14:08.227350 921861 logs.go:123] Gathering logs for kindnet [6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead] ...
I0717 20:14:08.227427 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead"
I0717 20:14:08.343745 921861 logs.go:123] Gathering logs for kubernetes-dashboard [53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71] ...
I0717 20:14:08.343780 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71"
I0717 20:14:08.413539 921861 logs.go:123] Gathering logs for storage-provisioner [d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad] ...
I0717 20:14:08.413570 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad"
I0717 20:14:08.461679 921861 logs.go:123] Gathering logs for container status ...
I0717 20:14:08.461713 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0717 20:14:08.543346 921861 logs.go:123] Gathering logs for kubelet ...
I0717 20:14:08.543377 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0717 20:14:08.352294 932216 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0717 20:14:09.339387 932216 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0717 20:14:09.339520 932216 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-362122 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0717 20:14:09.621636 932216 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0717 20:14:09.622282 932216 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-362122 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
I0717 20:14:10.185432 932216 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0717 20:14:10.509944 932216 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0717 20:14:10.808432 932216 kubeadm.go:310] [certs] Generating "sa" key and public key
I0717 20:14:10.808682 932216 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0717 20:14:11.036072 932216 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0717 20:14:12.463366 932216 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
W0717 20:14:08.621760 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:30 old-k8s-version-706521 kubelet[662]: E0717 20:08:30.661470 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:14:08.621988 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:30 old-k8s-version-706521 kubelet[662]: E0717 20:08:30.828892 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.625852 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:46 old-k8s-version-706521 kubelet[662]: E0717 20:08:46.565471 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:14:08.626261 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:47 old-k8s-version-706521 kubelet[662]: E0717 20:08:47.951245 662 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-97vqh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-97vqh" is forbidden: User "system:node:old-k8s-version-706521" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-706521' and this object
W0717 20:14:08.628352 921861 logs.go:138] Found kubelet problem: Jul 17 20:08:59 old-k8s-version-706521 kubelet[662]: E0717 20:08:59.963531 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.628854 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:00 old-k8s-version-706521 kubelet[662]: E0717 20:09:00.968809 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.629068 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:01 old-k8s-version-706521 kubelet[662]: E0717 20:09:01.552815 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.629971 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:02 old-k8s-version-706521 kubelet[662]: E0717 20:09:02.015089 662 pod_workers.go:191] Error syncing pod acbb1d8e-4bf9-4590-b17d-5cb03849d6a4 ("storage-provisioner_kube-system(acbb1d8e-4bf9-4590-b17d-5cb03849d6a4)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(acbb1d8e-4bf9-4590-b17d-5cb03849d6a4)"
W0717 20:14:08.630343 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:04 old-k8s-version-706521 kubelet[662]: E0717 20:09:04.098038 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.633441 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:15 old-k8s-version-706521 kubelet[662]: E0717 20:09:15.561998 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:14:08.634294 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:19 old-k8s-version-706521 kubelet[662]: E0717 20:09:19.118357 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.634663 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:24 old-k8s-version-706521 kubelet[662]: E0717 20:09:24.098214 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.634890 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:26 old-k8s-version-706521 kubelet[662]: E0717 20:09:26.558472 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.635270 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:36 old-k8s-version-706521 kubelet[662]: E0717 20:09:36.552738 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.635558 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:37 old-k8s-version-706521 kubelet[662]: E0717 20:09:37.562445 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.635910 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:49 old-k8s-version-706521 kubelet[662]: E0717 20:09:49.559705 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.636414 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:50 old-k8s-version-706521 kubelet[662]: E0717 20:09:50.204203 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.636847 921861 logs.go:138] Found kubelet problem: Jul 17 20:09:54 old-k8s-version-706521 kubelet[662]: E0717 20:09:54.098178 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.639637 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:01 old-k8s-version-706521 kubelet[662]: E0717 20:10:01.562475 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:14:08.640283 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:06 old-k8s-version-706521 kubelet[662]: E0717 20:10:06.556299 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.640568 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:15 old-k8s-version-706521 kubelet[662]: E0717 20:10:15.552476 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.640939 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:18 old-k8s-version-706521 kubelet[662]: E0717 20:10:18.552987 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.641153 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:26 old-k8s-version-706521 kubelet[662]: E0717 20:10:26.552937 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.641509 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:29 old-k8s-version-706521 kubelet[662]: E0717 20:10:29.552164 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.641718 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:38 old-k8s-version-706521 kubelet[662]: E0717 20:10:38.554109 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.642392 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:42 old-k8s-version-706521 kubelet[662]: E0717 20:10:42.348760 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.642757 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:44 old-k8s-version-706521 kubelet[662]: E0717 20:10:44.097722 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.642994 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:51 old-k8s-version-706521 kubelet[662]: E0717 20:10:51.552421 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.643351 921861 logs.go:138] Found kubelet problem: Jul 17 20:10:59 old-k8s-version-706521 kubelet[662]: E0717 20:10:59.552527 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.643561 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:03 old-k8s-version-706521 kubelet[662]: E0717 20:11:03.552379 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.643997 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:13 old-k8s-version-706521 kubelet[662]: E0717 20:11:13.552539 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.644210 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:17 old-k8s-version-706521 kubelet[662]: E0717 20:11:17.552652 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.644563 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:25 old-k8s-version-706521 kubelet[662]: E0717 20:11:25.552153 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.647180 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:32 old-k8s-version-706521 kubelet[662]: E0717 20:11:32.561515 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
W0717 20:14:08.647557 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:38 old-k8s-version-706521 kubelet[662]: E0717 20:11:38.552601 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.649442 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:43 old-k8s-version-706521 kubelet[662]: E0717 20:11:43.552468 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.649834 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:49 old-k8s-version-706521 kubelet[662]: E0717 20:11:49.552132 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.650047 921861 logs.go:138] Found kubelet problem: Jul 17 20:11:58 old-k8s-version-706521 kubelet[662]: E0717 20:11:58.552478 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.650670 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:04 old-k8s-version-706521 kubelet[662]: E0717 20:12:04.568005 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.650890 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:12 old-k8s-version-706521 kubelet[662]: E0717 20:12:12.552438 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.651245 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:14 old-k8s-version-706521 kubelet[662]: E0717 20:12:14.098192 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.651620 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:25 old-k8s-version-706521 kubelet[662]: E0717 20:12:25.552632 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.651840 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:25 old-k8s-version-706521 kubelet[662]: E0717 20:12:25.552990 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.652052 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:36 old-k8s-version-706521 kubelet[662]: E0717 20:12:36.552671 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.652453 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:36 old-k8s-version-706521 kubelet[662]: E0717 20:12:36.553533 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.652686 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:47 old-k8s-version-706521 kubelet[662]: E0717 20:12:47.552573 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.653076 921861 logs.go:138] Found kubelet problem: Jul 17 20:12:48 old-k8s-version-706521 kubelet[662]: E0717 20:12:48.552271 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.653339 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:01 old-k8s-version-706521 kubelet[662]: E0717 20:13:01.552554 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.653753 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:03 old-k8s-version-706521 kubelet[662]: E0717 20:13:03.552206 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.653966 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:14 old-k8s-version-706521 kubelet[662]: E0717 20:13:14.555362 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.654323 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:18 old-k8s-version-706521 kubelet[662]: E0717 20:13:18.552316 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.654532 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:25 old-k8s-version-706521 kubelet[662]: E0717 20:13:25.554625 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.654898 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:32 old-k8s-version-706521 kubelet[662]: E0717 20:13:32.553357 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.655106 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:39 old-k8s-version-706521 kubelet[662]: E0717 20:13:39.552513 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.655482 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:47 old-k8s-version-706521 kubelet[662]: E0717 20:13:47.552150 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.655700 921861 logs.go:138] Found kubelet problem: Jul 17 20:13:50 old-k8s-version-706521 kubelet[662]: E0717 20:13:50.552908 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:08.656099 921861 logs.go:138] Found kubelet problem: Jul 17 20:14:01 old-k8s-version-706521 kubelet[662]: E0717 20:14:01.552159 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:08.656322 921861 logs.go:138] Found kubelet problem: Jul 17 20:14:02 old-k8s-version-706521 kubelet[662]: E0717 20:14:02.552503 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0717 20:14:08.656338 921861 logs.go:123] Gathering logs for describe nodes ...
I0717 20:14:08.656364 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0717 20:14:08.845780 921861 logs.go:123] Gathering logs for kube-apiserver [f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc] ...
I0717 20:14:08.845821 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc"
I0717 20:14:08.943640 921861 logs.go:123] Gathering logs for etcd [ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151] ...
I0717 20:14:08.943689 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151"
I0717 20:14:09.004417 921861 logs.go:123] Gathering logs for kube-proxy [6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15] ...
I0717 20:14:09.004455 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15"
I0717 20:14:09.064477 921861 logs.go:123] Gathering logs for kube-controller-manager [ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3] ...
I0717 20:14:09.064509 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3"
I0717 20:14:09.180668 921861 logs.go:123] Gathering logs for containerd ...
I0717 20:14:09.180709 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
I0717 20:14:09.249534 921861 logs.go:123] Gathering logs for dmesg ...
I0717 20:14:09.249575 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0717 20:14:09.271241 921861 logs.go:123] Gathering logs for kube-apiserver [75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a] ...
I0717 20:14:09.271272 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a"
I0717 20:14:09.344317 921861 logs.go:123] Gathering logs for coredns [2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd] ...
I0717 20:14:09.344376 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd"
I0717 20:14:09.403848 921861 logs.go:123] Gathering logs for kube-scheduler [9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697] ...
I0717 20:14:09.403882 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697"
I0717 20:14:09.461187 921861 logs.go:123] Gathering logs for kube-controller-manager [54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2] ...
I0717 20:14:09.461221 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2"
I0717 20:14:09.556298 921861 logs.go:123] Gathering logs for kindnet [c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318] ...
I0717 20:14:09.556344 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318"
I0717 20:14:09.630480 921861 logs.go:123] Gathering logs for storage-provisioner [c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d] ...
I0717 20:14:09.630516 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d"
I0717 20:14:09.689256 921861 logs.go:123] Gathering logs for coredns [c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549] ...
I0717 20:14:09.689287 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549"
I0717 20:14:09.743850 921861 logs.go:123] Gathering logs for kube-scheduler [0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3] ...
I0717 20:14:09.743882 921861 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3"
I0717 20:14:09.795970 921861 out.go:304] Setting ErrFile to fd 2...
I0717 20:14:09.795994 921861 out.go:338] TERM=,COLORTERM=, which probably does not support color
W0717 20:14:09.796039 921861 out.go:239] X Problems detected in kubelet:
W0717 20:14:09.796055 921861 out.go:239] Jul 17 20:13:39 old-k8s-version-706521 kubelet[662]: E0717 20:13:39.552513 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:09.796062 921861 out.go:239] Jul 17 20:13:47 old-k8s-version-706521 kubelet[662]: E0717 20:13:47.552150 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:09.796078 921861 out.go:239] Jul 17 20:13:50 old-k8s-version-706521 kubelet[662]: E0717 20:13:50.552908 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
W0717 20:14:09.796085 921861 out.go:239] Jul 17 20:14:01 old-k8s-version-706521 kubelet[662]: E0717 20:14:01.552159 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
W0717 20:14:09.796102 921861 out.go:239] Jul 17 20:14:02 old-k8s-version-706521 kubelet[662]: E0717 20:14:02.552503 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
I0717 20:14:09.796110 921861 out.go:304] Setting ErrFile to fd 2...
I0717 20:14:09.796119 921861 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0717 20:14:13.320195 932216 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0717 20:14:13.973240 932216 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0717 20:14:14.912811 932216 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0717 20:14:14.913476 932216 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0717 20:14:14.918254 932216 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0717 20:14:14.921328 932216 out.go:204] - Booting up control plane ...
I0717 20:14:14.921466 932216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0717 20:14:14.921553 932216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0717 20:14:14.921633 932216 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0717 20:14:14.935266 932216 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0717 20:14:14.935652 932216 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0717 20:14:14.935921 932216 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0717 20:14:15.041007 932216 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0717 20:14:15.041109 932216 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
I0717 20:14:16.041892 932216 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001451069s
I0717 20:14:16.041992 932216 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0717 20:14:19.796705 921861 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
I0717 20:14:19.809529 921861 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
ok
I0717 20:14:19.811776 921861 out.go:177]
W0717 20:14:19.813922 921861 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
W0717 20:14:19.813962 921861 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
W0717 20:14:19.813979 921861 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
W0717 20:14:19.813985 921861 out.go:239] *
W0717 20:14:19.815148 921861 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│ │
│ * If the above advice does not help, please let us know: │
│ https://github.com/kubernetes/minikube/issues/new/choose │
│ │
│ * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│ │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0717 20:14:19.817411 921861 out.go:177]
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
525c1fa74d7e6 523cad1a4df73 2 minutes ago Exited dashboard-metrics-scraper 5 6e2a6d49d1a57 dashboard-metrics-scraper-8d5bb5db8-48n4d
c4a606723ca27 ba04bb24b9575 5 minutes ago Running storage-provisioner 2 7bc72da64af5c storage-provisioner
53c03d1b6817f 20b332c9a70d8 5 minutes ago Running kubernetes-dashboard 0 fced287be151a kubernetes-dashboard-cd95d586-xctlf
aeeb5f19148bc 1611cd07b61d5 5 minutes ago Running busybox 1 5585f0b89be6a busybox
d19fafa5a77f5 ba04bb24b9575 5 minutes ago Exited storage-provisioner 1 7bc72da64af5c storage-provisioner
2dacd0bd4b06a db91994f4ee8f 5 minutes ago Running coredns 1 6ba803de7eda2 coredns-74ff55c5b-92n49
66c38038a889a 25a5233254979 5 minutes ago Running kube-proxy 1 78294bd4defe4 kube-proxy-wl7dv
6176790881c6a 5e32961ddcea3 5 minutes ago Running kindnet-cni 1 4947c1f5027c5 kindnet-5497r
75e3d044b1369 2c08bbbc02d3a 6 minutes ago Running kube-apiserver 1 d127799a4f55c kube-apiserver-old-k8s-version-706521
ac470fb543e6c 05b738aa1bc63 6 minutes ago Running etcd 1 637379f34c9d2 etcd-old-k8s-version-706521
54ecf5505abec 1df8a2b116bd1 6 minutes ago Running kube-controller-manager 1 828d1d62b4178 kube-controller-manager-old-k8s-version-706521
0c9597e9f8bc3 e7605f88f17d6 6 minutes ago Running kube-scheduler 1 c4d36c2ea0973 kube-scheduler-old-k8s-version-706521
78d15db2f19f8 1611cd07b61d5 6 minutes ago Exited busybox 0 5ad541a174273 busybox
c037f98988b03 db91994f4ee8f 8 minutes ago Exited coredns 0 5e30281408552 coredns-74ff55c5b-92n49
c3bf9f0f253f7 5e32961ddcea3 8 minutes ago Exited kindnet-cni 0 1d451638078ae kindnet-5497r
6599c21c7bf2d 25a5233254979 8 minutes ago Exited kube-proxy 0 32efe757f75a6 kube-proxy-wl7dv
ecc8365583efa 05b738aa1bc63 9 minutes ago Exited etcd 0 47de197641e80 etcd-old-k8s-version-706521
ac9cc503bd572 1df8a2b116bd1 9 minutes ago Exited kube-controller-manager 0 d34409f9bf08a kube-controller-manager-old-k8s-version-706521
f0ab2d4f6b5d1 2c08bbbc02d3a 9 minutes ago Exited kube-apiserver 0 6d63beef943f2 kube-apiserver-old-k8s-version-706521
9a4bb6a19bc81 e7605f88f17d6 9 minutes ago Exited kube-scheduler 0 19f8d5767c08d kube-scheduler-old-k8s-version-706521
==> containerd <==
Jul 17 20:10:41 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:10:41.577764455Z" level=info msg="CreateContainer within sandbox \"6e2a6d49d1a5715cf1b818bce091a7d770205126fd88ff54e79d0934ff21e13e\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"73bb331d75b5a501c452db94014ce972148d9f55b65ecad0364b99c266d8803e\""
Jul 17 20:10:41 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:10:41.578491710Z" level=info msg="StartContainer for \"73bb331d75b5a501c452db94014ce972148d9f55b65ecad0364b99c266d8803e\""
Jul 17 20:10:41 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:10:41.651631308Z" level=info msg="StartContainer for \"73bb331d75b5a501c452db94014ce972148d9f55b65ecad0364b99c266d8803e\" returns successfully"
Jul 17 20:10:41 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:10:41.683055194Z" level=info msg="shim disconnected" id=73bb331d75b5a501c452db94014ce972148d9f55b65ecad0364b99c266d8803e namespace=k8s.io
Jul 17 20:10:41 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:10:41.683128721Z" level=warning msg="cleaning up after shim disconnected" id=73bb331d75b5a501c452db94014ce972148d9f55b65ecad0364b99c266d8803e namespace=k8s.io
Jul 17 20:10:41 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:10:41.683140594Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 17 20:10:42 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:10:42.351269908Z" level=info msg="RemoveContainer for \"b3c13eb4621ad9a7f3d9d6577b7f37851d2bfe9b04d279c3e79a0d82de10e521\""
Jul 17 20:10:42 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:10:42.356653409Z" level=info msg="RemoveContainer for \"b3c13eb4621ad9a7f3d9d6577b7f37851d2bfe9b04d279c3e79a0d82de10e521\" returns successfully"
Jul 17 20:11:32 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:11:32.552876078Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:11:32 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:11:32.558848427Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Jul 17 20:11:32 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:11:32.560410459Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Jul 17 20:11:32 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:11:32.560549029Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
Jul 17 20:12:03 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:12:03.554536536Z" level=info msg="CreateContainer within sandbox \"6e2a6d49d1a5715cf1b818bce091a7d770205126fd88ff54e79d0934ff21e13e\" for container name:\"dashboard-metrics-scraper\" attempt:5"
Jul 17 20:12:03 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:12:03.575190581Z" level=info msg="CreateContainer within sandbox \"6e2a6d49d1a5715cf1b818bce091a7d770205126fd88ff54e79d0934ff21e13e\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7\""
Jul 17 20:12:03 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:12:03.575867866Z" level=info msg="StartContainer for \"525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7\""
Jul 17 20:12:03 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:12:03.647657435Z" level=info msg="StartContainer for \"525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7\" returns successfully"
Jul 17 20:12:03 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:12:03.674569994Z" level=info msg="shim disconnected" id=525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7 namespace=k8s.io
Jul 17 20:12:03 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:12:03.674867352Z" level=warning msg="cleaning up after shim disconnected" id=525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7 namespace=k8s.io
Jul 17 20:12:03 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:12:03.674897252Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 17 20:12:04 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:12:04.569930122Z" level=info msg="RemoveContainer for \"73bb331d75b5a501c452db94014ce972148d9f55b65ecad0364b99c266d8803e\""
Jul 17 20:12:04 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:12:04.576615071Z" level=info msg="RemoveContainer for \"73bb331d75b5a501c452db94014ce972148d9f55b65ecad0364b99c266d8803e\" returns successfully"
Jul 17 20:14:16 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:14:16.553077861Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:14:16 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:14:16.558632620Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
Jul 17 20:14:16 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:14:16.560552498Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
Jul 17 20:14:16 old-k8s-version-706521 containerd[565]: time="2024-07-17T20:14:16.560681329Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
==> coredns [2dacd0bd4b06a10733d8b279ac2acb55a9c3fb1fc873a55d287429ff54cb57bd] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] 127.0.0.1:53401 - 28797 "HINFO IN 7448991953634587225.5059521389127820598. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024624911s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0717 20:09:01.417726 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-17 20:08:31.416064427 +0000 UTC m=+0.038971327) (total time: 30.001564856s):
Trace[2019727887]: [30.001564856s] [30.001564856s] END
E0717 20:09:01.417780 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0717 20:09:01.417728 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-17 20:08:31.417320937 +0000 UTC m=+0.040227845) (total time: 30.000380971s):
Trace[1427131847]: [30.000380971s] [30.000380971s] END
E0717 20:09:01.417795 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0717 20:09:01.418163 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-17 20:08:31.416912054 +0000 UTC m=+0.039818962) (total time: 30.001233733s):
Trace[911902081]: [30.001233733s] [30.001233733s] END
E0717 20:09:01.418177 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
==> coredns [c037f98988b03d907fa22e23d7e1f407c487cd8cd95acf9f6ab83428d4b13549] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
CoreDNS-1.7.0
linux/arm64, go1.14.4, f59c03d
[INFO] 127.0.0.1:41029 - 57761 "HINFO IN 1412977635605882806.4941873869751080655. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021918506s
==> describe nodes <==
Name: old-k8s-version-706521
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=old-k8s-version-706521
kubernetes.io/os=linux
minikube.k8s.io/commit=904d419c46be1a7134dbdb5e29deb5c439653f86
minikube.k8s.io/name=old-k8s-version-706521
minikube.k8s.io/primary=true
minikube.k8s.io/updated_at=2024_07_17T20_05_27_0700
minikube.k8s.io/version=v1.33.1
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 17 Jul 2024 20:05:23 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: old-k8s-version-706521
AcquireTime: <unset>
RenewTime: Wed, 17 Jul 2024 20:14:11 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 17 Jul 2024 20:09:29 +0000 Wed, 17 Jul 2024 20:05:16 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 17 Jul 2024 20:09:29 +0000 Wed, 17 Jul 2024 20:05:16 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 17 Jul 2024 20:09:29 +0000 Wed, 17 Jul 2024 20:05:16 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 17 Jul 2024 20:09:29 +0000 Wed, 17 Jul 2024 20:05:42 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.76.2
Hostname: old-k8s-version-706521
Capacity:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022360Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 203034800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
hugepages-32Mi: 0
hugepages-64Ki: 0
memory: 8022360Ki
pods: 110
System Info:
Machine ID: 7d60da94cf1945ee86e69c18fa9bced1
System UUID: c70108e1-b695-4ef5-85f6-87de13542852
Boot ID: db84ecf5-388f-4fbb-903a-e67f1c9c3200
Kernel Version: 5.15.0-1064-aws
OS Image: Ubuntu 22.04.4 LTS
Operating System: linux
Architecture: arm64
Container Runtime Version: containerd://1.7.18
Kubelet Version: v1.20.0
Kube-Proxy Version: v1.20.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (12 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default busybox 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m43s
kube-system coredns-74ff55c5b-92n49 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 8m40s
kube-system etcd-old-k8s-version-706521 100m (5%) 0 (0%) 100Mi (1%) 0 (0%) 8m47s
kube-system kindnet-5497r 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 8m40s
kube-system kube-apiserver-old-k8s-version-706521 250m (12%) 0 (0%) 0 (0%) 0 (0%) 8m47s
kube-system kube-controller-manager-old-k8s-version-706521 200m (10%) 0 (0%) 0 (0%) 0 (0%) 8m47s
kube-system kube-proxy-wl7dv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m40s
kube-system kube-scheduler-old-k8s-version-706521 100m (5%) 0 (0%) 0 (0%) 0 (0%) 8m47s
kube-system metrics-server-9975d5f86-xmdkg 100m (5%) 0 (0%) 200Mi (2%) 0 (0%) 6m31s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 8m39s
kubernetes-dashboard dashboard-metrics-scraper-8d5bb5db8-48n4d 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m35s
kubernetes-dashboard kubernetes-dashboard-cd95d586-xctlf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m35s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 950m (47%) 100m (5%)
memory 420Mi (5%) 220Mi (2%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
hugepages-32Mi 0 (0%) 0 (0%)
hugepages-64Ki 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 9m7s (x5 over 9m7s) kubelet Node old-k8s-version-706521 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m7s (x4 over 9m7s) kubelet Node old-k8s-version-706521 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m7s (x4 over 9m7s) kubelet Node old-k8s-version-706521 status is now: NodeHasSufficientPID
Normal Starting 8m48s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 8m47s kubelet Node old-k8s-version-706521 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 8m47s kubelet Node old-k8s-version-706521 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 8m47s kubelet Node old-k8s-version-706521 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 8m47s kubelet Updated Node Allocatable limit across pods
Normal NodeReady 8m40s kubelet Node old-k8s-version-706521 status is now: NodeReady
Normal Starting 8m39s kube-proxy Starting kube-proxy.
Normal Starting 6m2s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 6m2s (x8 over 6m2s) kubelet Node old-k8s-version-706521 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 6m2s (x8 over 6m2s) kubelet Node old-k8s-version-706521 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 6m2s (x7 over 6m2s) kubelet Node old-k8s-version-706521 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 6m2s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m51s kube-proxy Starting kube-proxy.
==> dmesg <==
[ +0.001108] FS-Cache: O-key=[8] '01603b0000000000'
[ +0.000746] FS-Cache: N-cookie c=00000030 [p=00000027 fl=2 nc=0 na=1]
[ +0.000991] FS-Cache: N-cookie d=00000000b058b47a{9p.inode} n=000000002bac5ede
[ +0.001160] FS-Cache: N-key=[8] '01603b0000000000'
[ +0.002667] FS-Cache: Duplicate cookie detected
[ +0.000706] FS-Cache: O-cookie c=0000002a [p=00000027 fl=226 nc=0 na=1]
[ +0.001042] FS-Cache: O-cookie d=00000000b058b47a{9p.inode} n=00000000f6861395
[ +0.001094] FS-Cache: O-key=[8] '01603b0000000000'
[ +0.000737] FS-Cache: N-cookie c=00000031 [p=00000027 fl=2 nc=0 na=1]
[ +0.000982] FS-Cache: N-cookie d=00000000b058b47a{9p.inode} n=000000005e87a84c
[ +0.001161] FS-Cache: N-key=[8] '01603b0000000000'
[ +2.750183] FS-Cache: Duplicate cookie detected
[ +0.000739] FS-Cache: O-cookie c=00000028 [p=00000027 fl=226 nc=0 na=1]
[ +0.001025] FS-Cache: O-cookie d=00000000b058b47a{9p.inode} n=0000000053a65d44
[ +0.001099] FS-Cache: O-key=[8] '00603b0000000000'
[ +0.000743] FS-Cache: N-cookie c=00000033 [p=00000027 fl=2 nc=0 na=1]
[ +0.000992] FS-Cache: N-cookie d=00000000b058b47a{9p.inode} n=000000002bac5ede
[ +0.001112] FS-Cache: N-key=[8] '00603b0000000000'
[ +0.338454] FS-Cache: Duplicate cookie detected
[ +0.000728] FS-Cache: O-cookie c=0000002d [p=00000027 fl=226 nc=0 na=1]
[ +0.001054] FS-Cache: O-cookie d=00000000b058b47a{9p.inode} n=000000005f378419
[ +0.001088] FS-Cache: O-key=[8] '06603b0000000000'
[ +0.000737] FS-Cache: N-cookie c=00000034 [p=00000027 fl=2 nc=0 na=1]
[ +0.000987] FS-Cache: N-cookie d=00000000b058b47a{9p.inode} n=00000000657fbc09
[ +0.001109] FS-Cache: N-key=[8] '06603b0000000000'
==> etcd [ac470fb543e6cbd49ef745efab21fa32e4633e2652b5503ab0d9dcdf4cacf422] <==
2024-07-17 20:10:15.883688 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:10:25.883527 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:10:35.883539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:10:45.883582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:10:55.883661 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:11:05.883898 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:11:15.883642 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:11:25.883641 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:11:35.883525 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:11:45.883579 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:11:55.883486 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:12:05.883678 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:12:15.883653 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:12:25.883569 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:12:35.883697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:12:45.883664 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:12:55.883589 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:13:05.883745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:13:15.883570 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:13:25.883564 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:13:35.883548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:13:45.883669 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:13:55.884014 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:14:05.883971 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:14:15.883506 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> etcd [ecc8365583efa7ca9ee39f437651f17a0ca24cd3743fd251cf4fc30da39b1151] <==
raft2024/07/17 20:05:16 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
raft2024/07/17 20:05:16 INFO: ea7e25599daad906 became leader at term 2
raft2024/07/17 20:05:16 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
2024-07-17 20:05:16.848877 I | etcdserver: published {Name:old-k8s-version-706521 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
2024-07-17 20:05:16.849003 I | embed: ready to serve client requests
2024-07-17 20:05:16.850744 I | embed: serving client requests on 192.168.76.2:2379
2024-07-17 20:05:16.859797 I | embed: ready to serve client requests
2024-07-17 20:05:16.865257 I | embed: serving client requests on 127.0.0.1:2379
2024-07-17 20:05:16.865325 I | etcdserver: setting up the initial cluster version to 3.4
2024-07-17 20:05:16.876075 N | etcdserver/membership: set the initial cluster version to 3.4
2024-07-17 20:05:16.876255 I | etcdserver/api: enabled capabilities for version 3.4
2024-07-17 20:05:38.415523 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:05:45.054062 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:05:55.051340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:06:05.051708 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:06:15.056588 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:06:25.051694 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:06:35.050778 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:06:45.051326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:06:55.053224 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:07:05.052900 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:07:15.051405 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:07:25.050878 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:07:35.052217 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2024-07-17 20:07:45.051020 I | etcdserver/api/etcdhttp: /health OK (status code 200)
==> kernel <==
20:14:22 up 3:56, 0 users, load average: 2.10, 1.93, 2.48
Linux old-k8s-version-706521 5.15.0-1064-aws #70~20.04.1-Ubuntu SMP Thu Jun 27 14:52:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
==> kindnet [6176790881c6ac318426c8734e414bf7336e25cd428aa7b0c94beb2965222ead] <==
I0717 20:13:21.708147 1 main.go:303] handling current node
W0717 20:13:24.636883 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0717 20:13:24.636916 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
W0717 20:13:27.811135 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0717 20:13:27.811169 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0717 20:13:31.708322 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:13:31.708374 1 main.go:303] handling current node
W0717 20:13:37.641498 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0717 20:13:37.641556 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
I0717 20:13:41.708589 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:13:41.708630 1 main.go:303] handling current node
I0717 20:13:51.708691 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:13:51.708730 1 main.go:303] handling current node
I0717 20:14:01.708188 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:14:01.708226 1 main.go:303] handling current node
W0717 20:14:08.813332 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0717 20:14:08.813373 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
I0717 20:14:11.708265 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:14:11.708310 1 main.go:303] handling current node
W0717 20:14:13.567966 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0717 20:14:13.568474 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
W0717 20:14:20.270712 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0717 20:14:20.270746 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0717 20:14:21.708560 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:14:21.708595 1 main.go:303] handling current node
==> kindnet [c3bf9f0f253f7699fdfa2b172895e923e7318b25e138545bc8ef76b8d4f5d318] <==
I0717 20:06:46.408026 1 main.go:303] handling current node
W0717 20:06:51.084059 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0717 20:06:51.084097 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
I0717 20:06:56.407965 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:06:56.408073 1 main.go:303] handling current node
I0717 20:07:06.408048 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:07:06.408087 1 main.go:303] handling current node
W0717 20:07:08.076819 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0717 20:07:08.076882 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
W0717 20:07:13.418616 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0717 20:07:13.418661 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0717 20:07:16.407918 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:07:16.407953 1 main.go:303] handling current node
I0717 20:07:26.408545 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:07:26.408633 1 main.go:303] handling current node
I0717 20:07:36.407945 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:07:36.407980 1 main.go:303] handling current node
W0717 20:07:43.511655 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
E0717 20:07:43.511692 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
W0717 20:07:44.185114 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
E0717 20:07:44.185146 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
I0717 20:07:46.407631 1 main.go:299] Handling node with IPs: map[192.168.76.2:{}]
I0717 20:07:46.407671 1 main.go:303] handling current node
W0717 20:07:47.983688 1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
E0717 20:07:47.983918 1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
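Note on the two kindnet blocks above: the paired W/E "forbidden" lines are reflector RBAC denials, not crashes. The kube-system:kindnet service account lacks list/watch on pods, namespaces and networkpolicies at cluster scope, so kindnet keeps handling the node while retrying those watches every few seconds. A minimal sketch of the kind of ClusterRole that would satisfy those reflectors, assuming the standard k8s.io/api/rbac/v1 types and sigs.k8s.io/yaml; the object name and exact rule set are illustrative, not the shipped kindnet manifest:

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Grants exactly the verbs the reflector errors above report as missing.
	role := rbacv1.ClusterRole{
		TypeMeta:   metav1.TypeMeta{APIVersion: "rbac.authorization.k8s.io/v1", Kind: "ClusterRole"},
		ObjectMeta: metav1.ObjectMeta{Name: "kindnet"}, // illustrative name
		Rules: []rbacv1.PolicyRule{
			{APIGroups: []string{""}, Resources: []string{"nodes", "pods", "namespaces"}, Verbs: []string{"list", "watch"}},
			{APIGroups: []string{"networking.k8s.io"}, Resources: []string{"networkpolicies"}, Verbs: []string{"list", "watch"}},
		},
	}
	out, err := yaml.Marshal(role)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // emits the ClusterRole as YAML, ready for kubectl apply
}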
==> kube-apiserver [75e3d044b1369fdc36ef180cbced62b1cb89b51b868314e20fa70759d089e41a] <==
I0717 20:10:57.808698 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:10:57.808755 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0717 20:11:31.936955 1 handler_proxy.go:102] no RequestInfo found in the context
E0717 20:11:31.937058 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0717 20:11:31.937144 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0717 20:11:35.116171 1 client.go:360] parsed scheme: "passthrough"
I0717 20:11:35.116215 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:11:35.116356 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0717 20:12:05.667719 1 client.go:360] parsed scheme: "passthrough"
I0717 20:12:05.667765 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:12:05.667774 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0717 20:12:40.767070 1 client.go:360] parsed scheme: "passthrough"
I0717 20:12:40.767112 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:12:40.767121 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0717 20:13:22.307638 1 client.go:360] parsed scheme: "passthrough"
I0717 20:13:22.307684 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:13:22.307693 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0717 20:13:30.010495 1 handler_proxy.go:102] no RequestInfo found in the context
E0717 20:13:30.010667 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0717 20:13:30.010706 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0717 20:14:06.824563 1 client.go:360] parsed scheme: "passthrough"
I0717 20:14:06.824634 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:14:06.824646 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
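Note on the kube-apiserver block above: the recurring 503 for v1beta1.metrics.k8s.io is the aggregation layer failing to reach a healthy metrics-server backend (consistent with metrics-server never becoming ready in this run), so the OpenAPI aggregation controller keeps requeueing that APIService with a rate limit. One way to read the Available condition on that APIService is via the client-go dynamic client; a sketch, with the kubeconfig path and error handling purely illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	gvr := schema.GroupVersionResource{Group: "apiregistration.k8s.io", Version: "v1", Resource: "apiservices"}
	obj, err := dyn.Resource(gvr).Get(context.TODO(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// status.conditions carries the Available condition whose message mirrors
	// the "service unavailable" 503 seen in the apiserver log above.
	conds, _, _ := unstructured.NestedSlice(obj.Object, "status", "conditions")
	fmt.Println(conds)
}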
==> kube-apiserver [f0ab2d4f6b5d117a710f34ef18e3fbe9cf8d12d751308813a4485f38f76fe1fc] <==
I0717 20:05:24.301903 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0717 20:05:24.331624 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0717 20:05:24.337023 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0717 20:05:24.337049 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0717 20:05:24.795104 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0717 20:05:24.832825 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0717 20:05:24.983744 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
I0717 20:05:24.985321 1 controller.go:606] quota admission added evaluator for: endpoints
I0717 20:05:24.990960 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0717 20:05:25.998593 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0717 20:05:26.457148 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0717 20:05:26.505031 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0717 20:05:35.011421 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0717 20:05:42.046986 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0717 20:05:42.048501 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0717 20:05:58.142819 1 client.go:360] parsed scheme: "passthrough"
I0717 20:05:58.142865 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:05:58.142874 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0717 20:06:40.313983 1 client.go:360] parsed scheme: "passthrough"
I0717 20:06:40.314026 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:06:40.314035 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0717 20:07:18.992207 1 client.go:360] parsed scheme: "passthrough"
I0717 20:07:18.992261 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 <nil> 0 <nil>}] <nil> <nil>}
I0717 20:07:18.992270 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0717 20:07:50.663999 1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
==> kube-controller-manager [54ecf5505abec3f28d1bbc465f5c90be25925cc47129eceadbcc6f5b6ad0faa2] <==
W0717 20:09:53.399293 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:10:20.335697 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:10:25.049807 1 request.go:655] Throttling request took 1.048610098s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
W0717 20:10:25.901223 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:10:50.837609 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:10:57.551740 1 request.go:655] Throttling request took 1.048274457s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0717 20:10:58.403120 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:11:21.339782 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:11:30.053471 1 request.go:655] Throttling request took 1.048288673s, request: GET:https://192.168.76.2:8443/apis/apps/v1?timeout=32s
W0717 20:11:30.905007 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:11:51.841648 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:12:02.555618 1 request.go:655] Throttling request took 1.048490987s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
W0717 20:12:03.407083 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:12:22.343566 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:12:35.011901 1 request.go:655] Throttling request took 1.001954223s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1?timeout=32s
W0717 20:12:35.909077 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:12:52.845362 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:13:07.559511 1 request.go:655] Throttling request took 1.048328425s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0717 20:13:08.411167 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:13:23.347208 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:13:40.061003 1 request.go:655] Throttling request took 1.047795727s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0717 20:13:40.912480 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
E0717 20:13:53.849047 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0717 20:14:12.562874 1 request.go:655] Throttling request took 1.048071821s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
W0717 20:14:13.414334 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
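Note on the controller-manager block above: the garbage collector and resource-quota controllers rediscover all API groups on a timer, and the dead metrics.k8s.io/v1beta1 aggregated group makes every pass fail partially; the "Throttling request" lines are just client-side rate limiting of that discovery burst. client-go's discovery layer deliberately returns partial results plus a typed error for this case. A small sketch using the discovery client (kubeconfig path illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// ServerPreferredResources keeps whatever it could discover and wraps the
	// broken groups (here metrics.k8s.io/v1beta1) in a typed aggregate error.
	lists, err := dc.ServerPreferredResources()
	if discovery.IsGroupDiscoveryFailedError(err) {
		fmt.Println("some groups failed discovery:", err)
	} else if err != nil {
		panic(err)
	}
	fmt.Println("usable resource lists:", len(lists))
}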
==> kube-controller-manager [ac9cc503bd572ecbc266dec792f20aa85754f275d0b90ca698874a88f9cf46b3] <==
W0717 20:05:42.133630 1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-706521. Assuming now as a timestamp.
I0717 20:05:42.133763 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal.
I0717 20:05:42.134116 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0717 20:05:42.134548 1 event.go:291] "Event occurred" object="old-k8s-version-706521" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-706521 event: Registered Node old-k8s-version-706521 in Controller"
I0717 20:05:42.154986 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5497r"
I0717 20:05:42.155022 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wl7dv"
I0717 20:05:42.155039 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-hs8kx"
I0717 20:05:42.170017 1 shared_informer.go:247] Caches are synced for resource quota
I0717 20:05:42.177508 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-92n49"
I0717 20:05:42.216414 1 request.go:655] Throttling request took 1.048554856s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1?timeout=32s
E0717 20:05:42.304633 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"e7e3d51b-0f2d-4b57-ac36-c8acf0ce7cae", ResourceVersion:"256", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63856843526, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001cd7780), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001cd77a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001cd77c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001ca3ec0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001cd77e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001cd7800), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001cd7840)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001c8fb00), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001c8db58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000862ee0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40002fbb30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001c8dba8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0717 20:05:42.320204 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
E0717 20:05:42.329480 1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"4dd2e59f-3999-4d37-a4c3-7d80201fdd10", ResourceVersion:"272", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63856843527, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240715-585640e9\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001cd78a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001cd78c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001cd78e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001cd7900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001cd7920), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001cd7940), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240715-585640e9", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001cd7960)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001cd79a0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001c8fb60), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001c8dda8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000862fc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40002fbb48)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001c8ddf0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
E0717 20:05:42.354317 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"e7e3d51b-0f2d-4b57-ac36-c8acf0ce7cae", ResourceVersion:"401", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63856843526, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001868d60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001868d80)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001868da0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001868dc0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001868de0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40017b3280), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001868e00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001868e20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001868e60)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40015cc720), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40012b37f8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000b7ed90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40006e3470)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40012b3848)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0717 20:05:42.620425 1 shared_informer.go:247] Caches are synced for garbage collector
I0717 20:05:42.627423 1 shared_informer.go:247] Caches are synced for garbage collector
I0717 20:05:42.627446 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0717 20:05:43.018483 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0717 20:05:43.018554 1 shared_informer.go:247] Caches are synced for resource quota
I0717 20:05:43.782094 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
I0717 20:05:43.794611 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-hs8kx"
I0717 20:07:50.179534 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
I0717 20:07:50.236319 1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
E0717 20:07:50.265978 1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
E0717 20:07:50.392810 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
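Note on the "Operation cannot be fulfilled ... the object has been modified" errors in this block: these are ordinary optimistic-concurrency losses (HTTP 409 Conflict). kubeadm and the controller-manager both write the kube-proxy and kindnet DaemonSet status during startup (visible in the ManagedFields of the dumps above), and the loser simply retries on its next sync, so nothing is actually broken. Client code hitting the same error would wrap the read-modify-write in client-go's conflict retry helper; a minimal sketch, with the annotation mutation purely illustrative:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-read the latest object and re-apply the change on every 409 Conflict,
	// the same recover-and-retry the controller performs on its next sync.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets("kube-system").Get(context.TODO(), "kube-proxy", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if ds.Annotations == nil {
			ds.Annotations = map[string]string{}
		}
		ds.Annotations["example/touched"] = "true" // hypothetical change
		_, err = cs.AppsV1().DaemonSets("kube-system").Update(context.TODO(), ds, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}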
==> kube-proxy [6599c21c7bf2d8cb71f4fe2afdea79aa5052c3a4fa5293b4fe227ddec48fba15] <==
I0717 20:05:43.117335 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0717 20:05:43.117470 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0717 20:05:43.154062 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0717 20:05:43.159216 1 server_others.go:185] Using iptables Proxier.
I0717 20:05:43.159492 1 server.go:650] Version: v1.20.0
I0717 20:05:43.163213 1 config.go:315] Starting service config controller
I0717 20:05:43.163244 1 shared_informer.go:240] Waiting for caches to sync for service config
I0717 20:05:43.165661 1 config.go:224] Starting endpoint slice config controller
I0717 20:05:43.165676 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0717 20:05:43.263335 1 shared_informer.go:247] Caches are synced for service config
I0717 20:05:43.265802 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-proxy [66c38038a889aa2cdbb136b72bd9f9812cf5f8341949a7ac8853a9666dac2c1e] <==
I0717 20:08:31.571603 1 node.go:172] Successfully retrieved node IP: 192.168.76.2
I0717 20:08:31.571681 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
W0717 20:08:31.608095 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0717 20:08:31.608518 1 server_others.go:185] Using iptables Proxier.
I0717 20:08:31.608968 1 server.go:650] Version: v1.20.0
I0717 20:08:31.609607 1 config.go:315] Starting service config controller
I0717 20:08:31.611069 1 shared_informer.go:240] Waiting for caches to sync for service config
I0717 20:08:31.611046 1 config.go:224] Starting endpoint slice config controller
I0717 20:08:31.611435 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0717 20:08:31.711424 1 shared_informer.go:247] Caches are synced for service config
I0717 20:08:31.711712 1 shared_informer.go:247] Caches are synced for endpoint slice config
==> kube-scheduler [0c9597e9f8bc3953181b111f73371bf5f88cdd9e3b431fe39689883c8cf0e1c3] <==
I0717 20:08:24.019934 1 serving.go:331] Generated self-signed cert in-memory
W0717 20:08:28.956302 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0717 20:08:28.957338 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0717 20:08:28.957372 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0717 20:08:28.957379 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0717 20:08:29.396765 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0717 20:08:29.398859 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0717 20:08:29.402884 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0717 20:08:29.402979 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0717 20:08:29.598969 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kube-scheduler [9a4bb6a19bc810d6af603abb2c44044a082a5470c93001e9f26a52ca0e4a9697] <==
W0717 20:05:23.484708 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0717 20:05:23.484926 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0717 20:05:23.485001 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0717 20:05:23.485058 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0717 20:05:23.564801 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0717 20:05:23.565038 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0717 20:05:23.573555 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0717 20:05:23.574843 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
E0717 20:05:23.588397 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0717 20:05:23.589067 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0717 20:05:23.592541 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0717 20:05:23.592830 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0717 20:05:23.593022 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0717 20:05:23.593261 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0717 20:05:23.593413 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0717 20:05:23.593467 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0717 20:05:23.597320 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0717 20:05:23.597529 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0717 20:05:23.599033 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0717 20:05:23.599313 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0717 20:05:24.467948 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0717 20:05:24.503524 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0717 20:05:24.506647 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0717 20:05:24.635586 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0717 20:05:25.175149 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
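Note: the Forbidden/RBAC warnings in both scheduler instances above appear only in the first seconds after each container start, before the apiserver has finished serving the built-in bindings for system:kube-scheduler; the "Caches are synced" lines show each instance recovered on its own. Had they persisted, the rolebinding suggested by the log message would take roughly this form for this cluster (illustrative, not part of the test output: the binding name is made up, and --user replaces the generic --serviceaccount placeholder because these logs show the scheduler authenticating as User "system:kube-scheduler"):
kubectl --context old-k8s-version-706521 -n kube-system create rolebinding scheduler-auth-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler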
==> kubelet <==
Jul 17 20:12:47 old-k8s-version-706521 kubelet[662]: E0717 20:12:47.552573 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:12:48 old-k8s-version-706521 kubelet[662]: I0717 20:12:48.551885 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7
Jul 17 20:12:48 old-k8s-version-706521 kubelet[662]: E0717 20:12:48.552271 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
Jul 17 20:13:01 old-k8s-version-706521 kubelet[662]: E0717 20:13:01.552554 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:13:03 old-k8s-version-706521 kubelet[662]: I0717 20:13:03.551837 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7
Jul 17 20:13:03 old-k8s-version-706521 kubelet[662]: E0717 20:13:03.552206 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
Jul 17 20:13:14 old-k8s-version-706521 kubelet[662]: E0717 20:13:14.555362 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:13:18 old-k8s-version-706521 kubelet[662]: I0717 20:13:18.551892 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7
Jul 17 20:13:18 old-k8s-version-706521 kubelet[662]: E0717 20:13:18.552316 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
Jul 17 20:13:25 old-k8s-version-706521 kubelet[662]: E0717 20:13:25.554625 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:13:32 old-k8s-version-706521 kubelet[662]: I0717 20:13:32.551838 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7
Jul 17 20:13:32 old-k8s-version-706521 kubelet[662]: E0717 20:13:32.553357 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
Jul 17 20:13:39 old-k8s-version-706521 kubelet[662]: E0717 20:13:39.552513 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:13:47 old-k8s-version-706521 kubelet[662]: I0717 20:13:47.551778 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7
Jul 17 20:13:47 old-k8s-version-706521 kubelet[662]: E0717 20:13:47.552150 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
Jul 17 20:13:50 old-k8s-version-706521 kubelet[662]: E0717 20:13:50.552908 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:14:01 old-k8s-version-706521 kubelet[662]: I0717 20:14:01.551798 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7
Jul 17 20:14:01 old-k8s-version-706521 kubelet[662]: E0717 20:14:01.552159 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
Jul 17 20:14:02 old-k8s-version-706521 kubelet[662]: E0717 20:14:02.552503 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
Jul 17 20:14:12 old-k8s-version-706521 kubelet[662]: I0717 20:14:12.551757 662 scope.go:95] [topologymanager] RemoveContainer - Container ID: 525c1fa74d7e65e148f32d665f21c5997d87346c97c0dc124f38ba7b59caf1e7
Jul 17 20:14:12 old-k8s-version-706521 kubelet[662]: E0717 20:14:12.552555 662 pod_workers.go:191] Error syncing pod 0db4799d-e07d-4111-9c9f-a1562c6f5a18 ("dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-48n4d_kubernetes-dashboard(0db4799d-e07d-4111-9c9f-a1562c6f5a18)"
Jul 17 20:14:16 old-k8s-version-706521 kubelet[662]: E0717 20:14:16.561271 662 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Jul 17 20:14:16 old-k8s-version-706521 kubelet[662]: E0717 20:14:16.561637 662 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Jul 17 20:14:16 old-k8s-version-706521 kubelet[662]: E0717 20:14:16.561823 662 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>} BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-rr477,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
Jul 17 20:14:16 old-k8s-version-706521 kubelet[662]: E0717 20:14:16.561971 662 pod_workers.go:191] Error syncing pod 2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35 ("metrics-server-9975d5f86-xmdkg_kube-system(2c0b90e5-c84b-4b99-8e9c-b10f85fb5d35)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
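Note: the two kubelet failure loops above are distinct. metrics-server never starts because fake.domain does not resolve (the DNS lookup against 192.168.76.1:53 fails; the start output's "Using image fake.domain/registry.k8s.io/echoserver:1.4" line indicates this test deliberately points the image at a bogus registry), while dashboard-metrics-scraper starts and then crash-loops. Two standard kubectl checks against the test's kubeconfig context (illustrative commands, not part of the test output):
kubectl --context old-k8s-version-706521 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
kubectl --context old-k8s-version-706521 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-48n4d --previous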
==> kubernetes-dashboard [53c03d1b6817f3639001519037c23383a1eb95c742fa51aec3e26f7aed7cff71] <==
2024/07/17 20:08:54 Starting overwatch
2024/07/17 20:08:54 Using namespace: kubernetes-dashboard
2024/07/17 20:08:54 Using in-cluster config to connect to apiserver
2024/07/17 20:08:54 Using secret token for csrf signing
2024/07/17 20:08:54 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/07/17 20:08:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2024/07/17 20:08:54 Successful initial request to the apiserver, version: v1.20.0
2024/07/17 20:08:54 Generating JWE encryption key
2024/07/17 20:08:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/07/17 20:08:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/07/17 20:08:54 Initializing JWE encryption key from synchronized object
2024/07/17 20:08:54 Creating in-cluster Sidecar client
2024/07/17 20:08:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:08:54 Serving insecurely on HTTP port: 9090
2024/07/17 20:09:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:09:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:10:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:10:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:11:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:11:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:12:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:12:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:13:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/07/17 20:13:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
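Note: the metric client health check targets the dashboard-metrics-scraper Service, whose only backing pod is the one crash-looping in the kubelet log above, so every 30-second retry fails for the duration of the run. One way to confirm the Service has no ready endpoints, assuming the same context (illustrative command):
kubectl --context old-k8s-version-706521 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper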
==> storage-provisioner [c4a606723ca2783a93c798d739f1c9b99fe9616b831d52853adfb4d2fff4610d] <==
I0717 20:09:17.873778 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0717 20:09:17.959899 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0717 20:09:17.960025 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0717 20:09:35.523346 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0717 20:09:35.523701 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706521_d899e3fd-4766-4ccd-8f1e-206a9beacc22!
I0717 20:09:35.525486 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"eaf00c27-79dd-42e8-b015-d37def683220", APIVersion:"v1", ResourceVersion:"877", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-706521_d899e3fd-4766-4ccd-8f1e-206a9beacc22 became leader
I0717 20:09:35.624671 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-706521_d899e3fd-4766-4ccd-8f1e-206a9beacc22!
==> storage-provisioner [d19fafa5a77f5326de3b78b5f5f3b1a43fc692f25e485103c50cd623f5ac83ad] <==
I0717 20:08:31.583313 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0717 20:09:01.598751 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
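Note: this earlier storage-provisioner instance exited fatally when its version probe to the in-cluster apiserver VIP (10.96.0.1:443) hit an i/o timeout during the post-restart window; the later instance shown above (started 20:09:17) reached the apiserver and acquired the leader lease normally. A quick check that the VIP is backed by an apiserver endpoint, assuming the default kubernetes Service (illustrative command):
kubectl --context old-k8s-version-706521 -n default get endpoints kubernetes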
-- /stdout --
helpers_test.go:254: (dbg) Run: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-706521 -n old-k8s-version-706521
helpers_test.go:261: (dbg) Run: kubectl --context old-k8s-version-706521 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-xmdkg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run: kubectl --context old-k8s-version-706521 describe pod metrics-server-9975d5f86-xmdkg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-706521 describe pod metrics-server-9975d5f86-xmdkg: exit status 1 (92.942188ms)
** stderr **
Error from server (NotFound): pods "metrics-server-9975d5f86-xmdkg" not found
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-706521 describe pod metrics-server-9975d5f86-xmdkg: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.41s)